00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 601
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3266
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.026 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.026 The recommended git tool is: git
00:00:00.027 using credential 00000000-0000-0000-0000-000000000002
00:00:00.028 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.059 Fetching changes from the remote Git repository
00:00:00.062 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.081 Using shallow fetch with depth 1
00:00:00.081 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.081 > git --version # timeout=10
00:00:00.106 > git --version # 'git version 2.39.2'
00:00:00.106 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.152 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.152 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.679 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.689 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.700 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:03.700 > git config core.sparsecheckout # timeout=10
00:00:03.710 > git read-tree -mu HEAD # timeout=10
00:00:03.726 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:03.747 Commit message: "inventory: add WCP3 to free inventory"
00:00:03.747 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:03.820 [Pipeline] Start of Pipeline
00:00:03.835 [Pipeline] library
00:00:03.837 Loading library shm_lib@master
00:00:03.837 Library shm_lib@master is cached. Copying from home.
00:00:03.855 [Pipeline] node
00:00:03.863 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:03.866 [Pipeline] {
00:00:03.877 [Pipeline] catchError
00:00:03.878 [Pipeline] {
00:00:03.891 [Pipeline] wrap
00:00:03.901 [Pipeline] {
00:00:03.907 [Pipeline] stage
00:00:03.909 [Pipeline] { (Prologue)
00:00:04.074 [Pipeline] sh
00:00:04.353 + logger -p user.info -t JENKINS-CI
00:00:04.369 [Pipeline] echo
00:00:04.371 Node: GP11
00:00:04.379 [Pipeline] sh
00:00:04.677 [Pipeline] setCustomBuildProperty
00:00:04.686 [Pipeline] echo
00:00:04.687 Cleanup processes
00:00:04.691 [Pipeline] sh
00:00:04.971 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.971 1792827 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.982 [Pipeline] sh
00:00:05.262 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.262 ++ grep -v 'sudo pgrep'
00:00:05.262 ++ awk '{print $1}'
00:00:05.262 + sudo kill -9
00:00:05.262 + true
00:00:05.276 [Pipeline] cleanWs
00:00:05.286 [WS-CLEANUP] Deleting project workspace...
00:00:05.286 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.293 [WS-CLEANUP] done
00:00:05.297 [Pipeline] setCustomBuildProperty
00:00:05.312 [Pipeline] sh
00:00:05.589 + sudo git config --global --replace-all safe.directory '*'
00:00:05.654 [Pipeline] httpRequest
00:00:05.686 [Pipeline] echo
00:00:05.687 Sorcerer 10.211.164.101 is alive
00:00:05.695 [Pipeline] httpRequest
00:00:05.700 HttpMethod: GET
00:00:05.700 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:05.701 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:05.719 Response Code: HTTP/1.1 200 OK
00:00:05.720 Success: Status code 200 is in the accepted range: 200,404
00:00:05.720 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:12.186 [Pipeline] sh
00:00:12.467 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:12.481 [Pipeline] httpRequest
00:00:12.521 [Pipeline] echo
00:00:12.522 Sorcerer 10.211.164.101 is alive
00:00:12.530 [Pipeline] httpRequest
00:00:12.534 HttpMethod: GET
00:00:12.535 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:12.535 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:12.552 Response Code: HTTP/1.1 200 OK
00:00:12.553 Success: Status code 200 is in the accepted range: 200,404
00:00:12.553 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:01:00.956 [Pipeline] sh
00:01:01.244 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:01:03.795 [Pipeline] sh
00:01:04.081 + git -C spdk log --oneline -n5
00:01:04.081 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:01:04.081 507e9ba07 nvme: add lock_depth for ctrlr_lock
00:01:04.081 62fda7b5f nvme: check pthread_mutex_destroy() return value
00:01:04.081 e03c164a1 nvme: add nvme_ctrlr_lock
00:01:04.081 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister
00:01:04.103 [Pipeline] withCredentials
00:01:04.115 > git --version # timeout=10
00:01:04.129 > git --version # 'git version 2.39.2'
00:01:04.149 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:04.152 [Pipeline] {
00:01:04.161 [Pipeline] retry
00:01:04.163 [Pipeline] {
00:01:04.180 [Pipeline] sh
00:01:04.463 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:01:06.389 [Pipeline] }
00:01:06.413 [Pipeline] // retry
00:01:06.420 [Pipeline] }
00:01:06.443 [Pipeline] // withCredentials
00:01:06.454 [Pipeline] httpRequest
00:01:06.482 [Pipeline] echo
00:01:06.484 Sorcerer 10.211.164.101 is alive
00:01:06.493 [Pipeline] httpRequest
00:01:06.498 HttpMethod: GET
00:01:06.498 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:06.499 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:06.506 Response Code: HTTP/1.1 200 OK
00:01:06.506 Success: Status code 200 is in the accepted range: 200,404
00:01:06.507 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:19.932 [Pipeline] sh
00:01:20.219 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:22.138 [Pipeline] sh
00:01:22.424 + git -C dpdk log --oneline -n5
00:01:22.424 caf0f5d395 version: 22.11.4
00:01:22.424 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:22.424 dc9c799c7d vhost: fix missing spinlock unlock
00:01:22.424 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:22.424 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:22.437 [Pipeline] }
00:01:22.455 [Pipeline] // stage
00:01:22.464 [Pipeline] stage
00:01:22.467 [Pipeline] { (Prepare)
00:01:22.488 [Pipeline] writeFile
00:01:22.505 [Pipeline] sh
00:01:22.790 + logger -p user.info -t JENKINS-CI
00:01:22.803 [Pipeline] sh
00:01:23.091 + logger -p user.info -t JENKINS-CI
00:01:23.103 [Pipeline] sh
00:01:23.388 + cat autorun-spdk.conf
00:01:23.388 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:23.388 SPDK_TEST_NVMF=1
00:01:23.388 SPDK_TEST_NVME_CLI=1
00:01:23.388 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:23.388 SPDK_TEST_NVMF_NICS=e810
00:01:23.388 SPDK_TEST_VFIOUSER=1
00:01:23.388 SPDK_RUN_UBSAN=1
00:01:23.388 NET_TYPE=phy
00:01:23.388 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:23.388 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:23.396 RUN_NIGHTLY=1
00:01:23.401 [Pipeline] readFile
00:01:23.429 [Pipeline] withEnv
00:01:23.431 [Pipeline] {
00:01:23.445 [Pipeline] sh
00:01:23.730 + set -ex
00:01:23.730 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:23.730 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:23.730 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:23.730 ++ SPDK_TEST_NVMF=1
00:01:23.730 ++ SPDK_TEST_NVME_CLI=1
00:01:23.730 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:23.730 ++ SPDK_TEST_NVMF_NICS=e810
00:01:23.730 ++ SPDK_TEST_VFIOUSER=1
00:01:23.730 ++ SPDK_RUN_UBSAN=1
00:01:23.730 ++ NET_TYPE=phy
00:01:23.730 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:23.730 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:23.730 ++ RUN_NIGHTLY=1
00:01:23.730 + case $SPDK_TEST_NVMF_NICS in
00:01:23.730 + DRIVERS=ice
00:01:23.730 + [[ tcp == \r\d\m\a ]]
00:01:23.730 + [[ -n ice ]]
00:01:23.730 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:23.730 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:23.730 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:23.730 rmmod: ERROR: Module irdma is not currently loaded
00:01:23.730 rmmod: ERROR: Module i40iw is not currently loaded
00:01:23.730 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:23.730 + true
00:01:23.730 + for D in $DRIVERS
00:01:23.730 + sudo modprobe ice
00:01:23.730 + exit 0
00:01:23.739 [Pipeline] }
00:01:23.757 [Pipeline] // withEnv
00:01:23.762 [Pipeline] }
00:01:23.778 [Pipeline] // stage
00:01:23.786 [Pipeline] catchError
00:01:23.788 [Pipeline] {
00:01:23.803 [Pipeline] timeout
00:01:23.803 Timeout set to expire in 50 min
00:01:23.805 [Pipeline] {
00:01:23.820 [Pipeline] stage
00:01:23.822 [Pipeline] { (Tests)
00:01:23.836 [Pipeline] sh
00:01:24.120 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:24.120 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:24.120 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:24.120 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:24.120 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:24.120 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:24.120 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:24.120 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:24.120 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:24.120 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:24.120 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:24.120 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:24.120 + source /etc/os-release
00:01:24.120 ++ NAME='Fedora Linux'
00:01:24.120 ++ VERSION='38 (Cloud Edition)'
00:01:24.120 ++ ID=fedora
00:01:24.120 ++ VERSION_ID=38
00:01:24.120 ++ VERSION_CODENAME=
00:01:24.120 ++ PLATFORM_ID=platform:f38
00:01:24.120 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:24.120 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:24.120 ++ LOGO=fedora-logo-icon
00:01:24.120 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:24.120 ++ HOME_URL=https://fedoraproject.org/
00:01:24.120 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:24.120 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:24.120 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:24.120 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:24.120 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:24.120 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:24.120 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:24.120 ++ SUPPORT_END=2024-05-14
00:01:24.120 ++ VARIANT='Cloud Edition'
00:01:24.120 ++ VARIANT_ID=cloud
00:01:24.120 + uname -a
00:01:24.120 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:24.120 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:25.084 Hugepages
00:01:25.084 node hugesize free / total
00:01:25.084 node0 1048576kB 0 / 0
00:01:25.084 node0 2048kB 0 / 0
00:01:25.084 node1 1048576kB 0 / 0
00:01:25.084 node1 2048kB 0 / 0
00:01:25.084
00:01:25.084 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:25.084 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:25.084 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:25.084 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:25.084 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:25.084 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:25.084 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:25.084 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:25.084 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:25.084 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:25.084 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:25.084 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:25.084 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:25.084 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:25.084 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:25.084 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:25.084 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:25.084 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:25.084 + rm -f /tmp/spdk-ld-path
00:01:25.084 + source autorun-spdk.conf
00:01:25.084 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.084 ++ SPDK_TEST_NVMF=1
00:01:25.084 ++ SPDK_TEST_NVME_CLI=1
00:01:25.084 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:25.084 ++ SPDK_TEST_NVMF_NICS=e810
00:01:25.084 ++ SPDK_TEST_VFIOUSER=1
00:01:25.084 ++ SPDK_RUN_UBSAN=1
00:01:25.084 ++ NET_TYPE=phy
00:01:25.084 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:25.084 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:25.084 ++ RUN_NIGHTLY=1
00:01:25.084 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:25.084 + [[ -n '' ]]
00:01:25.084 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:25.084 + for M in /var/spdk/build-*-manifest.txt
00:01:25.084 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:25.084 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:25.084 + for M in /var/spdk/build-*-manifest.txt
00:01:25.084 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:25.084 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:25.084 ++ uname
00:01:25.084 + [[ Linux == \L\i\n\u\x ]]
00:01:25.084 + sudo dmesg -T
00:01:25.084 + sudo dmesg --clear
00:01:25.084 + dmesg_pid=1794156
00:01:25.084 + [[ Fedora Linux == FreeBSD ]]
00:01:25.084 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:25.084 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:25.084 + sudo dmesg -Tw
00:01:25.084 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:25.084 + [[ -x /usr/src/fio-static/fio ]]
00:01:25.084 + export FIO_BIN=/usr/src/fio-static/fio
00:01:25.084 + FIO_BIN=/usr/src/fio-static/fio
00:01:25.084 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:25.084 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:25.084 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:25.084 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:25.084 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:25.084 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:25.084 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:25.084 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:25.084 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:25.084 Test configuration:
00:01:25.084 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.084 SPDK_TEST_NVMF=1
00:01:25.084 SPDK_TEST_NVME_CLI=1
00:01:25.084 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:25.084 SPDK_TEST_NVMF_NICS=e810
00:01:25.084 SPDK_TEST_VFIOUSER=1
00:01:25.084 SPDK_RUN_UBSAN=1
00:01:25.084 NET_TYPE=phy
00:01:25.084 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:25.084 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:25.343 RUN_NIGHTLY=1 02:49:20 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:25.343 02:49:20 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:25.343 02:49:20 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:25.343 02:49:20 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:25.343 02:49:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:25.343 02:49:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:25.343 02:49:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:25.343 02:49:20 -- paths/export.sh@5 -- $ export PATH
00:01:25.343 02:49:20 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:25.343 02:49:20 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:25.343 02:49:20 -- common/autobuild_common.sh@435 -- $ date +%s
00:01:25.343 02:49:20 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720918160.XXXXXX
00:01:25.343 02:49:20 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720918160.AkjSYq
00:01:25.343 02:49:20 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:01:25.343 02:49:20 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']'
00:01:25.343 02:49:20 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:25.343 02:49:20 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:01:25.344 02:49:20 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:25.344 02:49:20 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:25.344 02:49:20 -- common/autobuild_common.sh@451 -- $ get_config_params
00:01:25.344 02:49:20 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:01:25.344 02:49:20 -- common/autotest_common.sh@10 -- $ set +x
00:01:25.344 02:49:20 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:01:25.344 02:49:20 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:25.344 02:49:20 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:25.344 02:49:20 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:25.344 02:49:20 -- spdk/autobuild.sh@16 -- $ date -u
00:01:25.344 Sun Jul 14 12:49:20 AM UTC 2024
00:01:25.344 02:49:20 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:25.344 LTS-59-g4b94202c6
00:01:25.344 02:49:20 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:25.344 02:49:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:25.344 02:49:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:25.344 02:49:20 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:25.344 02:49:20 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:25.344 02:49:20 -- common/autotest_common.sh@10 -- $ set +x
00:01:25.344 ************************************
00:01:25.344 START TEST ubsan
00:01:25.344 ************************************
00:01:25.344 02:49:20 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:01:25.344 using ubsan
00:01:25.344
00:01:25.344 real 0m0.000s
00:01:25.344 user 0m0.000s
00:01:25.344 sys 0m0.000s
00:01:25.344 02:49:20 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:25.344 02:49:20 -- common/autotest_common.sh@10 -- $ set +x
00:01:25.344 ************************************
00:01:25.344 END TEST ubsan
00:01:25.344 ************************************
00:01:25.344 02:49:20 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:01:25.344 02:49:20 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:25.344 02:49:20 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:25.344 02:49:20 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']'
00:01:25.344 02:49:20 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:25.344 02:49:20 -- common/autotest_common.sh@10 -- $ set +x
00:01:25.344 ************************************
00:01:25.344 START TEST build_native_dpdk
00:01:25.344 ************************************
00:01:25.344 02:49:20 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk
00:01:25.344 02:49:20 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:25.344 02:49:20 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:25.344 02:49:20 -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:25.344 02:49:20 -- common/autobuild_common.sh@51 -- $ local compiler
00:01:25.344 02:49:20 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:25.344 02:49:20 -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:25.344 02:49:20 -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:25.344 02:49:20 -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:25.344 02:49:20 -- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:25.344 02:49:20 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:25.344 02:49:20 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:25.344 02:49:20 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:25.344 02:49:20 -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:01:25.344 02:49:20 -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:01:25.344 02:49:20 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:25.344 02:49:20 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:25.344 02:49:20 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:25.344 02:49:20 -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
00:01:25.344 02:49:20 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:25.344 02:49:20 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:01:25.344 caf0f5d395 version: 22.11.4
00:01:25.344 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:25.344 dc9c799c7d vhost: fix missing spinlock unlock
00:01:25.344 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:25.344 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:25.344 02:49:20 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:25.344 02:49:20 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:25.344 02:49:20 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:01:25.344 02:49:20 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:25.344 02:49:20 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:25.344 02:49:20 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:25.344 02:49:20 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:25.344 02:49:20 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:25.344 02:49:20 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:25.344 02:49:20 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:25.344 02:49:20 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:25.344 02:49:20 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:25.344 02:49:20 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:25.344 02:49:20 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:01:25.344 02:49:20 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:25.344 02:49:20 -- common/autobuild_common.sh@168 -- $ uname -s
00:01:25.344 02:49:20 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:01:25.344 02:49:20 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
00:01:25.344 02:49:20 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:01:25.344 02:49:20 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:01:25.344 02:49:20 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:01:25.344 02:49:20 -- scripts/common.sh@335 -- $ IFS=.-:
00:01:25.344 02:49:20 -- scripts/common.sh@335 -- $ read -ra ver1
00:01:25.344 02:49:20 -- scripts/common.sh@336 -- $ IFS=.-:
00:01:25.344 02:49:20 -- scripts/common.sh@336 -- $ read -ra ver2
00:01:25.344 02:49:20 -- scripts/common.sh@337 -- $ local 'op=<'
00:01:25.344 02:49:20 -- scripts/common.sh@339 -- $ ver1_l=3
00:01:25.344 02:49:20 -- scripts/common.sh@340 -- $ ver2_l=3
00:01:25.344 02:49:20 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:01:25.344 02:49:20 -- scripts/common.sh@343 -- $ case "$op" in
00:01:25.344 02:49:20 -- scripts/common.sh@344 -- $ : 1
00:01:25.344 02:49:20 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:01:25.344 02:49:20 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:25.344 02:49:20 -- scripts/common.sh@364 -- $ decimal 22
00:01:25.344 02:49:20 -- scripts/common.sh@352 -- $ local d=22
00:01:25.344 02:49:20 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:01:25.344 02:49:20 -- scripts/common.sh@354 -- $ echo 22
00:01:25.344 02:49:20 -- scripts/common.sh@364 -- $ ver1[v]=22
00:01:25.344 02:49:20 -- scripts/common.sh@365 -- $ decimal 21
00:01:25.344 02:49:20 -- scripts/common.sh@352 -- $ local d=21
00:01:25.344 02:49:20 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:25.344 02:49:20 -- scripts/common.sh@354 -- $ echo 21
00:01:25.344 02:49:20 -- scripts/common.sh@365 -- $ ver2[v]=21
00:01:25.344 02:49:20 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:01:25.344 02:49:20 -- scripts/common.sh@366 -- $ return 1
00:01:25.344 02:49:20 -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:25.345 patching file config/rte_config.h
00:01:25.345 Hunk #1 succeeded at 60 (offset 1 line).
00:01:25.345 02:49:20 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
00:01:25.345 02:49:20 -- common/autobuild_common.sh@178 -- $ uname -s
00:01:25.345 02:49:20 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
00:01:25.345 02:49:20 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:01:25.345 02:49:20 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:29.528 The Meson build system
00:01:29.528 Version: 1.3.1
00:01:29.528 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:29.528 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:01:29.528 Build type: native build
00:01:29.528 Program cat found: YES (/usr/bin/cat)
00:01:29.528 Project name: DPDK
00:01:29.528 Project version: 22.11.4
00:01:29.528 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:29.528 C linker for the host machine: gcc ld.bfd 2.39-16
00:01:29.528 Host machine cpu family: x86_64
00:01:29.528 Host machine cpu: x86_64
00:01:29.528 Message: ## Building in Developer Mode ##
00:01:29.528 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:29.528 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:01:29.528 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:01:29.528 Program objdump found: YES (/usr/bin/objdump)
00:01:29.528 Program python3 found: YES (/usr/bin/python3)
00:01:29.528 Program cat found: YES (/usr/bin/cat)
00:01:29.528 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:01:29.528 Checking for size of "void *" : 8
00:01:29.528 Checking for size of "void *" : 8 (cached)
00:01:29.528 Library m found: YES
00:01:29.528 Library numa found: YES
00:01:29.528 Has header "numaif.h" : YES
00:01:29.528 Library fdt found: NO
00:01:29.528 Library execinfo found: NO
00:01:29.528 Has header "execinfo.h" : YES
00:01:29.528 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:29.528 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:29.528 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:29.528 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:29.528 Run-time dependency openssl found: YES 3.0.9
00:01:29.528 Run-time dependency libpcap found: YES 1.10.4
00:01:29.528 Has header "pcap.h" with dependency libpcap: YES
00:01:29.528 Compiler for C supports arguments -Wcast-qual: YES
00:01:29.528 Compiler for C supports arguments -Wdeprecated: YES
00:01:29.528 Compiler for C supports arguments -Wformat: YES
00:01:29.528 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:29.528 Compiler for C supports arguments -Wformat-security: NO
00:01:29.528 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:29.528 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:29.528 Compiler for C supports arguments -Wnested-externs: YES
00:01:29.528 Compiler for C supports arguments -Wold-style-definition: YES
00:01:29.528 Compiler for C supports arguments -Wpointer-arith: YES
00:01:29.528 Compiler for C supports arguments -Wsign-compare: YES
00:01:29.528 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:29.528 Compiler for C supports arguments -Wundef: YES
00:01:29.528 Compiler for C supports arguments -Wwrite-strings: YES
00:01:29.528 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:29.528 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:29.528 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:29.528 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:29.528 Compiler for C supports arguments -mavx512f: YES
00:01:29.528 Checking if "AVX512 checking" compiles: YES
00:01:29.528 Fetching value of define "__SSE4_2__" : 1
00:01:29.528 Fetching value of define "__AES__" : 1
00:01:29.528 Fetching value of define "__AVX__" : 1
00:01:29.528 Fetching value of define "__AVX2__" : (undefined)
00:01:29.528 Fetching value of define "__AVX512BW__" : (undefined)
00:01:29.528 Fetching value of define "__AVX512CD__" : (undefined)
00:01:29.528 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:29.528 Fetching value of define "__AVX512F__" : (undefined)
00:01:29.528 Fetching value of define "__AVX512VL__" : (undefined)
00:01:29.528 Fetching value of define "__PCLMUL__" : 1
00:01:29.528 Fetching value of define "__RDRND__" : 1
00:01:29.528 Fetching value of define "__RDSEED__" : (undefined)
00:01:29.528 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:29.528 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:29.528 Message: lib/kvargs: Defining dependency "kvargs"
00:01:29.528 Message: lib/telemetry: Defining dependency "telemetry"
00:01:29.528 Checking for function "getentropy" : YES
00:01:29.528 Message: lib/eal: Defining dependency "eal"
00:01:29.528 Message: lib/ring: Defining dependency "ring"
00:01:29.528 Message: lib/rcu: Defining dependency "rcu"
00:01:29.528 Message: lib/mempool: Defining dependency "mempool"
00:01:29.528 Message: lib/mbuf: Defining dependency "mbuf"
00:01:29.528 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:29.528 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:29.528 Compiler for C supports arguments -mpclmul: YES
00:01:29.528 Compiler for C supports arguments -maes: YES
00:01:29.528 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:29.528 Compiler for C supports arguments -mavx512bw: YES
00:01:29.528 Compiler for C supports arguments -mavx512dq: YES
00:01:29.528 Compiler for C supports arguments -mavx512vl: YES
00:01:29.528 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:29.528 Compiler for C supports arguments -mavx2: YES
00:01:29.528 Compiler for C supports arguments -mavx: YES
00:01:29.528 Message: lib/net: Defining dependency "net"
00:01:29.528 Message: lib/meter: Defining dependency "meter"
00:01:29.528 Message: lib/ethdev: Defining dependency "ethdev"
00:01:29.528 Message: lib/pci: Defining dependency "pci"
00:01:29.528 Message: lib/cmdline: Defining dependency "cmdline"
00:01:29.528 Message: lib/metrics: Defining dependency "metrics"
00:01:29.528 Message: lib/hash: Defining dependency "hash"
00:01:29.528 Message: lib/timer: Defining dependency "timer"
00:01:29.528 Fetching value of define "__AVX2__" : (undefined) (cached)
00:01:29.528 Compiler for C supports arguments -mavx2: YES (cached)
00:01:29.528 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:29.528 Fetching value of define "__AVX512VL__" : (undefined) (cached)
00:01:29.528 Fetching value of define "__AVX512CD__" : (undefined) (cached)
00:01:29.528 Fetching value of define "__AVX512BW__" : (undefined) (cached)
00:01:29.528 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES
00:01:29.528 Message: lib/acl: Defining dependency "acl"
00:01:29.528 Message: lib/bbdev: Defining dependency "bbdev"
00:01:29.528 Message: lib/bitratestats: Defining dependency "bitratestats"
00:01:29.528 Run-time dependency libelf found: YES 0.190
00:01:29.528 Message: lib/bpf: Defining dependency "bpf"
00:01:29.528 Message: lib/cfgfile: Defining dependency "cfgfile"
00:01:29.528 Message: lib/compressdev: Defining dependency "compressdev"
00:01:29.528 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:29.528 Message: lib/distributor: Defining dependency "distributor"
00:01:29.528 Message: lib/efd: Defining dependency "efd"
Message: lib/eventdev: Defining dependency "eventdev" 00:01:29.528 Message: lib/gpudev: Defining dependency "gpudev" 00:01:29.528 Message: lib/gro: Defining dependency "gro" 00:01:29.528 Message: lib/gso: Defining dependency "gso" 00:01:29.528 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:29.528 Message: lib/jobstats: Defining dependency "jobstats" 00:01:29.528 Message: lib/latencystats: Defining dependency "latencystats" 00:01:29.528 Message: lib/lpm: Defining dependency "lpm" 00:01:29.528 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:29.528 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:29.528 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:29.528 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:29.528 Message: lib/member: Defining dependency "member" 00:01:29.528 Message: lib/pcapng: Defining dependency "pcapng" 00:01:29.528 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:29.528 Message: lib/power: Defining dependency "power" 00:01:29.528 Message: lib/rawdev: Defining dependency "rawdev" 00:01:29.528 Message: lib/regexdev: Defining dependency "regexdev" 00:01:29.528 Message: lib/dmadev: Defining dependency "dmadev" 00:01:29.528 Message: lib/rib: Defining dependency "rib" 00:01:29.528 Message: lib/reorder: Defining dependency "reorder" 00:01:29.528 Message: lib/sched: Defining dependency "sched" 00:01:29.528 Message: lib/security: Defining dependency "security" 00:01:29.528 Message: lib/stack: Defining dependency "stack" 00:01:29.529 Has header "linux/userfaultfd.h" : YES 00:01:29.529 Message: lib/vhost: Defining dependency "vhost" 00:01:29.529 Message: lib/ipsec: Defining dependency "ipsec" 00:01:29.529 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:29.529 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:29.529 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:29.529 Compiler for C supports 
arguments -mavx512bw: YES (cached) 00:01:29.529 Message: lib/fib: Defining dependency "fib" 00:01:29.529 Message: lib/port: Defining dependency "port" 00:01:29.529 Message: lib/pdump: Defining dependency "pdump" 00:01:29.529 Message: lib/table: Defining dependency "table" 00:01:29.529 Message: lib/pipeline: Defining dependency "pipeline" 00:01:29.529 Message: lib/graph: Defining dependency "graph" 00:01:29.529 Message: lib/node: Defining dependency "node" 00:01:29.529 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:29.529 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:29.529 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:29.529 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:29.529 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:29.529 Compiler for C supports arguments -Wno-unused-value: YES 00:01:30.906 Compiler for C supports arguments -Wno-format: YES 00:01:30.906 Compiler for C supports arguments -Wno-format-security: YES 00:01:30.906 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:30.906 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:30.906 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:30.906 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:30.906 Fetching value of define "__AVX2__" : (undefined) (cached) 00:01:30.906 Compiler for C supports arguments -mavx2: YES (cached) 00:01:30.906 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:30.906 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:30.906 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:30.906 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:30.906 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:30.906 Program doxygen found: YES (/usr/bin/doxygen) 00:01:30.906 Configuring doxy-api.conf using configuration 00:01:30.906 Program sphinx-build 
found: NO 00:01:30.906 Configuring rte_build_config.h using configuration 00:01:30.906 Message: 00:01:30.906 ================= 00:01:30.906 Applications Enabled 00:01:30.906 ================= 00:01:30.906 00:01:30.906 apps: 00:01:30.906 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:01:30.906 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:01:30.906 test-security-perf, 00:01:30.906 00:01:30.906 Message: 00:01:30.906 ================= 00:01:30.906 Libraries Enabled 00:01:30.906 ================= 00:01:30.906 00:01:30.906 libs: 00:01:30.906 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:01:30.906 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:01:30.906 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:01:30.906 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:01:30.906 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:01:30.906 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:01:30.906 table, pipeline, graph, node, 00:01:30.906 00:01:30.906 Message: 00:01:30.906 =============== 00:01:30.906 Drivers Enabled 00:01:30.906 =============== 00:01:30.906 00:01:30.906 common: 00:01:30.906 00:01:30.906 bus: 00:01:30.906 pci, vdev, 00:01:30.906 mempool: 00:01:30.906 ring, 00:01:30.906 dma: 00:01:30.906 00:01:30.906 net: 00:01:30.906 i40e, 00:01:30.906 raw: 00:01:30.906 00:01:30.906 crypto: 00:01:30.906 00:01:30.906 compress: 00:01:30.906 00:01:30.906 regex: 00:01:30.906 00:01:30.906 vdpa: 00:01:30.906 00:01:30.906 event: 00:01:30.906 00:01:30.906 baseband: 00:01:30.906 00:01:30.906 gpu: 00:01:30.906 00:01:30.906 00:01:30.906 Message: 00:01:30.906 ================= 00:01:30.906 Content Skipped 00:01:30.906 ================= 00:01:30.906 00:01:30.906 apps: 00:01:30.906 00:01:30.906 libs: 00:01:30.906 kni: explicitly disabled via build config (deprecated lib) 00:01:30.906 
flow_classify: explicitly disabled via build config (deprecated lib) 00:01:30.906 00:01:30.906 drivers: 00:01:30.906 common/cpt: not in enabled drivers build config 00:01:30.906 common/dpaax: not in enabled drivers build config 00:01:30.906 common/iavf: not in enabled drivers build config 00:01:30.906 common/idpf: not in enabled drivers build config 00:01:30.906 common/mvep: not in enabled drivers build config 00:01:30.906 common/octeontx: not in enabled drivers build config 00:01:30.906 bus/auxiliary: not in enabled drivers build config 00:01:30.906 bus/dpaa: not in enabled drivers build config 00:01:30.906 bus/fslmc: not in enabled drivers build config 00:01:30.906 bus/ifpga: not in enabled drivers build config 00:01:30.906 bus/vmbus: not in enabled drivers build config 00:01:30.906 common/cnxk: not in enabled drivers build config 00:01:30.906 common/mlx5: not in enabled drivers build config 00:01:30.906 common/qat: not in enabled drivers build config 00:01:30.906 common/sfc_efx: not in enabled drivers build config 00:01:30.906 mempool/bucket: not in enabled drivers build config 00:01:30.906 mempool/cnxk: not in enabled drivers build config 00:01:30.906 mempool/dpaa: not in enabled drivers build config 00:01:30.906 mempool/dpaa2: not in enabled drivers build config 00:01:30.906 mempool/octeontx: not in enabled drivers build config 00:01:30.906 mempool/stack: not in enabled drivers build config 00:01:30.906 dma/cnxk: not in enabled drivers build config 00:01:30.906 dma/dpaa: not in enabled drivers build config 00:01:30.906 dma/dpaa2: not in enabled drivers build config 00:01:30.906 dma/hisilicon: not in enabled drivers build config 00:01:30.906 dma/idxd: not in enabled drivers build config 00:01:30.906 dma/ioat: not in enabled drivers build config 00:01:30.906 dma/skeleton: not in enabled drivers build config 00:01:30.906 net/af_packet: not in enabled drivers build config 00:01:30.906 net/af_xdp: not in enabled drivers build config 00:01:30.906 net/ark: not in 
enabled drivers build config 00:01:30.906 net/atlantic: not in enabled drivers build config 00:01:30.906 net/avp: not in enabled drivers build config 00:01:30.906 net/axgbe: not in enabled drivers build config 00:01:30.906 net/bnx2x: not in enabled drivers build config 00:01:30.906 net/bnxt: not in enabled drivers build config 00:01:30.906 net/bonding: not in enabled drivers build config 00:01:30.906 net/cnxk: not in enabled drivers build config 00:01:30.906 net/cxgbe: not in enabled drivers build config 00:01:30.906 net/dpaa: not in enabled drivers build config 00:01:30.906 net/dpaa2: not in enabled drivers build config 00:01:30.906 net/e1000: not in enabled drivers build config 00:01:30.906 net/ena: not in enabled drivers build config 00:01:30.906 net/enetc: not in enabled drivers build config 00:01:30.906 net/enetfec: not in enabled drivers build config 00:01:30.906 net/enic: not in enabled drivers build config 00:01:30.906 net/failsafe: not in enabled drivers build config 00:01:30.906 net/fm10k: not in enabled drivers build config 00:01:30.906 net/gve: not in enabled drivers build config 00:01:30.906 net/hinic: not in enabled drivers build config 00:01:30.906 net/hns3: not in enabled drivers build config 00:01:30.906 net/iavf: not in enabled drivers build config 00:01:30.907 net/ice: not in enabled drivers build config 00:01:30.907 net/idpf: not in enabled drivers build config 00:01:30.907 net/igc: not in enabled drivers build config 00:01:30.907 net/ionic: not in enabled drivers build config 00:01:30.907 net/ipn3ke: not in enabled drivers build config 00:01:30.907 net/ixgbe: not in enabled drivers build config 00:01:30.907 net/kni: not in enabled drivers build config 00:01:30.907 net/liquidio: not in enabled drivers build config 00:01:30.907 net/mana: not in enabled drivers build config 00:01:30.907 net/memif: not in enabled drivers build config 00:01:30.907 net/mlx4: not in enabled drivers build config 00:01:30.907 net/mlx5: not in enabled drivers build 
config 00:01:30.907 net/mvneta: not in enabled drivers build config 00:01:30.907 net/mvpp2: not in enabled drivers build config 00:01:30.907 net/netvsc: not in enabled drivers build config 00:01:30.907 net/nfb: not in enabled drivers build config 00:01:30.907 net/nfp: not in enabled drivers build config 00:01:30.907 net/ngbe: not in enabled drivers build config 00:01:30.907 net/null: not in enabled drivers build config 00:01:30.907 net/octeontx: not in enabled drivers build config 00:01:30.907 net/octeon_ep: not in enabled drivers build config 00:01:30.907 net/pcap: not in enabled drivers build config 00:01:30.907 net/pfe: not in enabled drivers build config 00:01:30.907 net/qede: not in enabled drivers build config 00:01:30.907 net/ring: not in enabled drivers build config 00:01:30.907 net/sfc: not in enabled drivers build config 00:01:30.907 net/softnic: not in enabled drivers build config 00:01:30.907 net/tap: not in enabled drivers build config 00:01:30.907 net/thunderx: not in enabled drivers build config 00:01:30.907 net/txgbe: not in enabled drivers build config 00:01:30.907 net/vdev_netvsc: not in enabled drivers build config 00:01:30.907 net/vhost: not in enabled drivers build config 00:01:30.907 net/virtio: not in enabled drivers build config 00:01:30.907 net/vmxnet3: not in enabled drivers build config 00:01:30.907 raw/cnxk_bphy: not in enabled drivers build config 00:01:30.907 raw/cnxk_gpio: not in enabled drivers build config 00:01:30.907 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:30.907 raw/ifpga: not in enabled drivers build config 00:01:30.907 raw/ntb: not in enabled drivers build config 00:01:30.907 raw/skeleton: not in enabled drivers build config 00:01:30.907 crypto/armv8: not in enabled drivers build config 00:01:30.907 crypto/bcmfs: not in enabled drivers build config 00:01:30.907 crypto/caam_jr: not in enabled drivers build config 00:01:30.907 crypto/ccp: not in enabled drivers build config 00:01:30.907 crypto/cnxk: not in 
enabled drivers build config 00:01:30.907 crypto/dpaa_sec: not in enabled drivers build config 00:01:30.907 crypto/dpaa2_sec: not in enabled drivers build config 00:01:30.907 crypto/ipsec_mb: not in enabled drivers build config 00:01:30.907 crypto/mlx5: not in enabled drivers build config 00:01:30.907 crypto/mvsam: not in enabled drivers build config 00:01:30.907 crypto/nitrox: not in enabled drivers build config 00:01:30.907 crypto/null: not in enabled drivers build config 00:01:30.907 crypto/octeontx: not in enabled drivers build config 00:01:30.907 crypto/openssl: not in enabled drivers build config 00:01:30.907 crypto/scheduler: not in enabled drivers build config 00:01:30.907 crypto/uadk: not in enabled drivers build config 00:01:30.907 crypto/virtio: not in enabled drivers build config 00:01:30.907 compress/isal: not in enabled drivers build config 00:01:30.907 compress/mlx5: not in enabled drivers build config 00:01:30.907 compress/octeontx: not in enabled drivers build config 00:01:30.907 compress/zlib: not in enabled drivers build config 00:01:30.907 regex/mlx5: not in enabled drivers build config 00:01:30.907 regex/cn9k: not in enabled drivers build config 00:01:30.907 vdpa/ifc: not in enabled drivers build config 00:01:30.907 vdpa/mlx5: not in enabled drivers build config 00:01:30.907 vdpa/sfc: not in enabled drivers build config 00:01:30.907 event/cnxk: not in enabled drivers build config 00:01:30.907 event/dlb2: not in enabled drivers build config 00:01:30.907 event/dpaa: not in enabled drivers build config 00:01:30.907 event/dpaa2: not in enabled drivers build config 00:01:30.907 event/dsw: not in enabled drivers build config 00:01:30.907 event/opdl: not in enabled drivers build config 00:01:30.907 event/skeleton: not in enabled drivers build config 00:01:30.907 event/sw: not in enabled drivers build config 00:01:30.907 event/octeontx: not in enabled drivers build config 00:01:30.907 baseband/acc: not in enabled drivers build config 00:01:30.907 
baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:30.907 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:30.907 baseband/la12xx: not in enabled drivers build config 00:01:30.907 baseband/null: not in enabled drivers build config 00:01:30.907 baseband/turbo_sw: not in enabled drivers build config 00:01:30.907 gpu/cuda: not in enabled drivers build config 00:01:30.907 00:01:30.907 00:01:30.907 Build targets in project: 316 00:01:30.907 00:01:30.907 DPDK 22.11.4 00:01:30.907 00:01:30.907 User defined options 00:01:30.907 libdir : lib 00:01:30.907 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:30.907 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:30.907 c_link_args : 00:01:30.907 enable_docs : false 00:01:30.907 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:30.907 enable_kmods : false 00:01:30.907 machine : native 00:01:30.907 tests : false 00:01:30.907 00:01:30.907 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:30.907 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
00:01:30.907 02:49:25 -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48
00:01:30.907 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:01:30.907 [1/745] Generating lib/rte_kvargs_def with a custom command
00:01:30.907 [2/745] Generating lib/rte_kvargs_mingw with a custom command
00:01:30.907 [3/745] Generating lib/rte_telemetry_def with a custom command
00:01:30.907 [4/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:30.907 [5/745] Generating lib/rte_telemetry_mingw with a custom command
00:01:30.907 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:30.907 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:30.907 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:30.907 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:30.907 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:30.907 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:30.907 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:31.170 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:31.170 [14/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:31.170 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:31.170 [16/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:31.170 [17/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:31.170 [18/745] Linking static target lib/librte_kvargs.a
00:01:31.170 [19/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:31.170 [20/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:31.170 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:31.170 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:31.170 [23/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:31.170 [24/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:31.170 [25/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:31.170 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:31.170 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:31.170 [28/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:31.170 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:31.170 [30/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:31.170 [31/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o
00:01:31.170 [32/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:31.170 [33/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:31.170 [34/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:31.170 [35/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:31.170 [36/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:31.170 [37/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:31.170 [38/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:31.170 [39/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:31.170 [40/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:31.170 [41/745] Generating lib/rte_eal_def with a custom command
00:01:31.170 [42/745] Generating lib/rte_eal_mingw with a custom command
00:01:31.170 [43/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:31.170 [44/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:31.170 [45/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:31.170 [46/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:31.170 [47/745] Generating lib/rte_ring_mingw with a custom command
00:01:31.170 [48/745] Generating lib/rte_ring_def with a custom command
00:01:31.170 [49/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:31.170 [50/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:31.170 [51/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:31.170 [52/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:31.170 [53/745] Generating lib/rte_rcu_def with a custom command
00:01:31.170 [54/745] Generating lib/rte_rcu_mingw with a custom command
00:01:31.170 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:31.433 [56/745] Generating lib/rte_mempool_def with a custom command
00:01:31.433 [57/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:31.433 [58/745] Generating lib/rte_mempool_mingw with a custom command
00:01:31.433 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:31.433 [60/745] Generating lib/rte_mbuf_mingw with a custom command
00:01:31.433 [61/745] Generating lib/rte_mbuf_def with a custom command
00:01:31.433 [62/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o
00:01:31.433 [63/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:31.433 [64/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:31.433 [65/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:31.433 [66/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:31.433 [67/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:31.433 [68/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:31.433 [69/745] Generating lib/rte_net_def with a custom command
00:01:31.433 [70/745] Generating lib/rte_net_mingw with a custom command
00:01:31.433 [71/745] Generating lib/rte_meter_def with a custom command
00:01:31.433 [72/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:31.433 [73/745] Generating lib/rte_meter_mingw with a custom command
00:01:31.433 [74/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:31.433 [75/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:31.433 [76/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:31.433 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:31.433 [78/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:31.433 [79/745] Generating lib/rte_ethdev_def with a custom command
00:01:31.433 [80/745] Linking target lib/librte_kvargs.so.23.0
00:01:31.433 [81/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:31.433 [82/745] Linking static target lib/librte_ring.a
00:01:31.433 [83/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:31.433 [84/745] Generating lib/rte_ethdev_mingw with a custom command
00:01:31.694 [85/745] Generating lib/rte_pci_def with a custom command
00:01:31.694 [86/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:31.694 [87/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:31.694 [88/745] Linking static target lib/librte_meter.a
00:01:31.694 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:31.694 [90/745] Generating lib/rte_pci_mingw with a custom command
00:01:31.694 [91/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:31.694 [92/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:31.694 [93/745] Linking static target lib/librte_pci.a
00:01:31.694 [94/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:01:31.694 [95/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:31.694 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:31.955 [97/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:31.955 [98/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:31.955 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:31.955 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:31.955 [101/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:31.955 [102/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:31.955 [103/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:31.955 [104/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:31.955 [105/745] Generating lib/rte_cmdline_def with a custom command
00:01:31.955 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:31.955 [107/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:31.955 [108/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:31.955 [109/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:31.955 [110/745] Linking static target lib/librte_telemetry.a
00:01:31.955 [111/745] Generating lib/rte_cmdline_mingw with a custom command
00:01:31.955 [112/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:31.955 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:32.216 [114/745] Generating lib/rte_metrics_mingw with a custom command
00:01:32.216 [115/745] Generating lib/rte_metrics_def with a custom command
00:01:32.216 [116/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:32.216 [117/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:32.216 [118/745] Generating lib/rte_hash_def with a custom command
00:01:32.216 [119/745] Generating lib/rte_hash_mingw with a custom command
00:01:32.216 [120/745] Generating lib/rte_timer_def with a custom command
00:01:32.216 [121/745] Generating lib/rte_timer_mingw with a custom command
00:01:32.216 [122/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:32.216 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:32.216 [124/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:01:32.481 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:32.481 [126/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:32.481 [127/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:32.481 [128/745] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:32.481 [129/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:32.481 [130/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:32.481 [131/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:32.481 [132/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:32.481 [133/745] Generating lib/rte_acl_def with a custom command
00:01:32.481 [134/745] Generating lib/rte_acl_mingw with a custom command
00:01:32.481 [135/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:32.481 [136/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.481 [137/745] Generating lib/rte_bbdev_def with a custom command
00:01:32.481 [138/745] Generating lib/rte_bbdev_mingw with a custom command
00:01:32.481 [139/745] Generating lib/rte_bitratestats_def with a custom command
00:01:32.481 [140/745] Generating lib/rte_bitratestats_mingw with a custom command
00:01:32.481 [141/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:32.481 [142/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:32.481 [143/745] Linking target lib/librte_telemetry.so.23.0
00:01:32.742 [144/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:32.742 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:32.742 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:32.742 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:32.742 [148/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:32.742 [149/745] Generating lib/rte_bpf_def with a custom command
00:01:32.742 [150/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:32.742 [151/745] Generating lib/rte_bpf_mingw with a custom command
00:01:32.742 [152/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:32.742 [153/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:32.742 [154/745] Generating lib/rte_cfgfile_def with a custom command
00:01:32.742 [155/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:32.742 [156/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:01:32.742 [157/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:32.742 [158/745] Generating lib/rte_cfgfile_mingw with a custom command
00:01:32.742 [159/745] Generating lib/rte_compressdev_def with a custom command
00:01:33.004 [160/745] Generating lib/rte_compressdev_mingw with a custom command
00:01:33.004 [161/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:33.004 [162/745] Generating lib/rte_cryptodev_def with a custom command
00:01:33.004 [163/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:33.004 [164/745] Generating lib/rte_cryptodev_mingw with a custom command
00:01:33.004 [165/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:33.004 [166/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:33.004 [167/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:33.004 [168/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:33.004 [169/745] Linking static target lib/librte_timer.a
00:01:33.004 [170/745] Linking static target lib/librte_rcu.a
00:01:33.004 [171/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:33.004 [172/745] Linking static target lib/librte_cmdline.a
00:01:33.004 [173/745] Generating lib/rte_distributor_def with a custom command
00:01:33.004 [174/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:33.004 [175/745] Generating lib/rte_distributor_mingw with a custom command
00:01:33.004 [176/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:33.004 [177/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:33.004 [178/745] Linking static target lib/librte_net.a
00:01:33.004 [179/745] Generating lib/rte_efd_def with a custom command
00:01:33.004 [180/745] Generating lib/rte_efd_mingw with a custom command
00:01:33.264 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:33.264 [182/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:01:33.264 [183/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:01:33.264 [184/745] Linking static target lib/librte_metrics.a
00:01:33.264 [185/745] Linking static target lib/librte_cfgfile.a
00:01:33.264 [186/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:33.264 [187/745] Linking static target lib/librte_mempool.a
00:01:33.523 [188/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:01:33.523 [189/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.523 [190/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.523 [191/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.523 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:33.523 [193/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:33.523 [194/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:01:33.523 [195/745] Generating lib/rte_eventdev_def with a custom command
00:01:33.785 [196/745] Linking static target lib/librte_eal.a
00:01:33.785 [197/745] Generating lib/rte_eventdev_mingw with a custom command
00:01:33.785 [198/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:01:33.785 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:01:33.785 [200/745] Generating lib/rte_gpudev_def with a custom command
00:01:33.785 [201/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:01:33.785 [202/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:01:33.785 [203/745] Generating lib/rte_gpudev_mingw with a custom command
00:01:33.785 [204/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.785 [205/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:01:33.785 [206/745] Linking static target lib/librte_bitratestats.a
00:01:33.785 [207/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.785 [208/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:01:33.785 [209/745] Generating lib/rte_gro_def with a custom command
00:01:33.785 [210/745] Generating lib/rte_gro_mingw with a custom command 00:01:34.048 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:34.048 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:34.048 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:34.048 [214/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:34.048 [215/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:34.048 [216/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.309 [217/745] Generating lib/rte_gso_def with a custom command 00:01:34.310 [218/745] Generating lib/rte_gso_mingw with a custom command 00:01:34.310 [219/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:34.310 [220/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:34.310 [221/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:34.310 [222/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.310 [223/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:34.310 [224/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:34.310 [225/745] Generating lib/rte_ip_frag_def with a custom command 00:01:34.310 [226/745] Linking static target lib/librte_bbdev.a 00:01:34.310 [227/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:34.310 [228/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.310 [229/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:34.572 [230/745] Generating lib/rte_jobstats_def with a custom command 00:01:34.572 [231/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:34.572 
[232/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:34.572 [233/745] Generating lib/rte_latencystats_def with a custom command 00:01:34.572 [234/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:34.572 [235/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:34.572 [236/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:34.572 [237/745] Generating lib/rte_lpm_def with a custom command 00:01:34.572 [238/745] Linking static target lib/librte_compressdev.a 00:01:34.572 [239/745] Generating lib/rte_lpm_mingw with a custom command 00:01:34.572 [240/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:34.572 [241/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:34.836 [242/745] Linking static target lib/librte_jobstats.a 00:01:34.836 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:34.836 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:35.095 [245/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:35.095 [246/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:35.095 [247/745] Linking static target lib/librte_distributor.a 00:01:35.095 [248/745] Generating lib/rte_member_def with a custom command 00:01:35.095 [249/745] Generating lib/rte_member_mingw with a custom command 00:01:35.095 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:35.095 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:35.095 [252/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.095 [253/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:35.095 [254/745] Generating lib/rte_pcapng_def with a custom command 00:01:35.362 [255/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:35.362 [256/745] Compiling C 
object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:35.362 [257/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:35.362 [258/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.362 [259/745] Linking static target lib/librte_bpf.a 00:01:35.362 [260/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:35.362 [261/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:35.362 [262/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:35.362 [263/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:35.362 [264/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:35.362 [265/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.362 [266/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:35.362 [267/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:35.362 [268/745] Linking static target lib/librte_gpudev.a 00:01:35.362 [269/745] Generating lib/rte_power_def with a custom command 00:01:35.362 [270/745] Generating lib/rte_power_mingw with a custom command 00:01:35.362 [271/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:35.362 [272/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:35.362 [273/745] Generating lib/rte_rawdev_def with a custom command 00:01:35.640 [274/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:35.640 [275/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:35.640 [276/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:35.640 [277/745] Generating lib/rte_regexdev_def with a custom command 00:01:35.640 [278/745] Linking static target lib/librte_gro.a 00:01:35.640 [279/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:35.640 [280/745] Compiling C object 
lib/librte_power.a.p/power_rte_power.c.o 00:01:35.640 [281/745] Generating lib/rte_dmadev_def with a custom command 00:01:35.640 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:35.640 [283/745] Generating lib/rte_rib_def with a custom command 00:01:35.640 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:35.640 [285/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:35.640 [286/745] Generating lib/rte_reorder_def with a custom command 00:01:35.920 [287/745] Generating lib/rte_reorder_mingw with a custom command 00:01:35.920 [288/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:35.920 [289/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.920 [290/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.920 [291/745] Generating lib/rte_sched_def with a custom command 00:01:35.920 [292/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:35.920 [293/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:35.920 [294/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:35.920 [295/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:35.920 [296/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.920 [297/745] Generating lib/rte_sched_mingw with a custom command 00:01:35.920 [298/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:35.920 [299/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:35.920 [300/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:35.920 [301/745] Generating lib/rte_security_def with a custom command 00:01:35.920 [302/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:35.920 [303/745] Compiling C object 
lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:35.920 [304/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:35.920 [305/745] Linking static target lib/librte_latencystats.a 00:01:35.920 [306/745] Generating lib/rte_security_mingw with a custom command 00:01:36.186 [307/745] Generating lib/rte_stack_def with a custom command 00:01:36.186 [308/745] Generating lib/rte_stack_mingw with a custom command 00:01:36.186 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:36.186 [310/745] Linking static target lib/librte_rawdev.a 00:01:36.186 [311/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:36.186 [312/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:36.186 [313/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:36.186 [314/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:36.186 [315/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:36.186 [316/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:36.186 [317/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:36.186 [318/745] Linking static target lib/librte_stack.a 00:01:36.186 [319/745] Generating lib/rte_vhost_def with a custom command 00:01:36.186 [320/745] Generating lib/rte_vhost_mingw with a custom command 00:01:36.186 [321/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:36.186 [322/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:36.186 [323/745] Linking static target lib/librte_dmadev.a 00:01:36.453 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:36.453 [325/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.453 [326/745] Linking static target lib/librte_ip_frag.a 00:01:36.453 [327/745] Compiling C 
object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:36.453 [328/745] Generating lib/rte_ipsec_def with a custom command 00:01:36.453 [329/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:36.453 [330/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:36.453 [331/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.716 [332/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:36.716 [333/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.716 [334/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:36.716 [335/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.716 [336/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.716 [337/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:36.975 [338/745] Generating lib/rte_fib_def with a custom command 00:01:36.975 [339/745] Generating lib/rte_fib_mingw with a custom command 00:01:36.975 [340/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:36.975 [341/745] Linking static target lib/librte_gso.a 00:01:36.975 [342/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:36.975 [343/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:36.975 [344/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:36.975 [345/745] Linking static target lib/librte_regexdev.a 00:01:36.975 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.241 [347/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:37.241 [348/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.241 [349/745] Compiling C object 
lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:37.501 [350/745] Linking static target lib/librte_pcapng.a 00:01:37.501 [351/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:37.501 [352/745] Linking static target lib/librte_lpm.a 00:01:37.501 [353/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:37.501 [354/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:37.501 [355/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:37.501 [356/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:37.501 [357/745] Linking static target lib/librte_efd.a 00:01:37.501 [358/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:37.501 [359/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:37.501 [360/745] Linking static target lib/librte_reorder.a 00:01:37.763 [361/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:37.763 [362/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:37.763 [363/745] Generating lib/rte_port_def with a custom command 00:01:37.763 [364/745] Generating lib/rte_port_mingw with a custom command 00:01:37.763 [365/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:37.763 [366/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:37.763 [367/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.763 [368/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:37.763 [369/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:37.763 [370/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:37.763 [371/745] Generating lib/rte_pdump_def with a custom command 00:01:37.763 [372/745] Generating lib/rte_pdump_mingw with a custom command 00:01:37.763 [373/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.026 
[374/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:38.026 [375/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:38.026 [376/745] Linking static target lib/acl/libavx2_tmp.a 00:01:38.026 [377/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:38.026 [378/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:38.026 [379/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:38.026 [380/745] Linking static target lib/librte_security.a 00:01:38.026 [381/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:38.026 [382/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.026 [383/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.026 [384/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:38.026 [385/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.026 [386/745] Linking static target lib/librte_power.a 00:01:38.285 [387/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:38.285 [388/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:38.285 [389/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:38.285 [390/745] Linking static target lib/librte_hash.a 00:01:38.285 [391/745] Linking static target lib/acl/libavx512_tmp.a 00:01:38.285 [392/745] Linking static target lib/librte_acl.a 00:01:38.285 [393/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:38.285 [394/745] Linking static target lib/librte_rib.a 00:01:38.552 [395/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:38.552 [396/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:38.552 [397/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:38.552 [398/745] 
Generating lib/rte_table_def with a custom command 00:01:38.552 [399/745] Generating lib/rte_table_mingw with a custom command 00:01:38.552 [400/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.816 [401/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.816 [402/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:38.816 [403/745] Linking static target lib/librte_ethdev.a 00:01:38.816 [404/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:38.816 [405/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.077 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:39.077 [407/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:39.077 [408/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:39.077 [409/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:39.077 [410/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:39.077 [411/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:39.077 [412/745] Linking static target lib/librte_mbuf.a 00:01:39.077 [413/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:39.077 [414/745] Generating lib/rte_pipeline_def with a custom command 00:01:39.077 [415/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:39.077 [416/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:39.338 [417/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.338 [418/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:39.338 [419/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:39.338 [420/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:39.338 [421/745] 
Linking static target lib/librte_fib.a 00:01:39.338 [422/745] Generating lib/rte_graph_def with a custom command 00:01:39.338 [423/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:39.338 [424/745] Generating lib/rte_graph_mingw with a custom command 00:01:39.602 [425/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:39.602 [426/745] Linking static target lib/librte_member.a 00:01:39.602 [427/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.602 [428/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:39.602 [429/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:39.602 [430/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:39.602 [431/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:39.602 [432/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:39.602 [433/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:39.602 [434/745] Linking static target lib/librte_eventdev.a 00:01:39.602 [435/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:39.602 [436/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:39.602 [437/745] Generating lib/rte_node_def with a custom command 00:01:39.602 [438/745] Generating lib/rte_node_mingw with a custom command 00:01:39.862 [439/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:39.862 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.862 [441/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:39.862 [442/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:39.862 [443/745] Linking static target lib/librte_sched.a 00:01:39.862 [444/745] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:39.862 [445/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:39.862 [446/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:39.862 [447/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:39.862 [448/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.125 [449/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:40.125 [450/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:40.125 [451/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:40.126 [452/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:40.126 [453/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:40.126 [454/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.126 [455/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:40.126 [456/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:40.126 [457/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:01:40.126 [458/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:40.126 [459/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:40.394 [460/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:40.394 [461/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:40.394 [462/745] Linking static target lib/librte_cryptodev.a 00:01:40.394 [463/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:40.394 [464/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:40.394 [465/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:40.394 [466/745] Compiling C object 
lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:40.394 [467/745] Linking static target lib/librte_pdump.a 00:01:40.394 [468/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:40.394 [469/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:40.394 [470/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:40.394 [471/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:40.394 [472/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:40.394 [473/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:40.656 [474/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.656 [475/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:40.656 [476/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:40.656 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:40.656 [478/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:40.656 [479/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:40.920 [480/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:40.920 [481/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:40.920 [482/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:40.920 [483/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:40.920 [484/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.920 [485/745] Linking static target drivers/librte_bus_vdev.a 00:01:40.920 [486/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.920 [487/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:40.920 [488/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 
00:01:40.920 [489/745] Linking static target lib/librte_table.a 00:01:40.920 [490/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:41.182 [491/745] Linking static target lib/librte_ipsec.a 00:01:41.183 [492/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:41.183 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:41.183 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:41.183 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.446 [496/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:41.446 [497/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:41.446 [498/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:41.446 [499/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:41.446 [500/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:41.446 [501/745] Linking static target lib/librte_graph.a 00:01:41.712 [502/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:41.712 [503/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:41.712 [504/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.712 [505/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:41.712 [506/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:41.712 [507/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:41.712 [508/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.712 [509/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:41.712 [510/745] Linking static target drivers/librte_bus_pci.a 00:01:41.712 [511/745] Compiling C object 
lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:41.974 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:41.974 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:41.974 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.239 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:42.239 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.500 [517/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:42.500 [518/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:42.500 [519/745] Linking static target lib/librte_port.a 00:01:42.500 [520/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.500 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:42.500 [522/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:42.764 [523/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:42.764 [524/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:42.764 [525/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:42.764 [526/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:43.032 [527/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.032 [528/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:43.032 [529/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:43.032 [530/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:43.032 [531/745] Linking static target drivers/librte_mempool_ring.a 00:01:43.032 [532/745] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:43.032 [533/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:43.032 [534/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:43.294 [535/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:43.294 [536/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:43.294 [537/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.294 [538/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:43.294 [539/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:43.556 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:43.556 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:43.820 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:43.820 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:43.820 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:43.820 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:44.083 [546/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:44.083 [547/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:44.083 [548/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:44.083 [549/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:44.083 [550/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:44.083 [551/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 
00:01:44.347 [552/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:44.347 [553/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:44.615 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:44.876 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:44.876 [556/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:44.876 [557/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:44.876 [558/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:45.143 [559/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:45.143 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:45.143 [561/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:45.404 [562/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:45.404 [563/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:45.404 [564/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:45.404 [565/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:45.404 [566/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:45.404 [567/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:45.404 [568/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:45.404 [569/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:45.404 [570/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:45.666 [571/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.666 [572/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:45.666 
[573/745] Linking target lib/librte_eal.so.23.0 00:01:45.666 [574/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:45.666 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:45.927 [576/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:45.927 [577/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:45.927 [578/745] Linking target lib/librte_ring.so.23.0 00:01:46.190 [579/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:46.190 [580/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:46.190 [581/745] Linking target lib/librte_pci.so.23.0 00:01:46.190 [582/745] Linking target lib/librte_meter.so.23.0 00:01:46.190 [583/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:46.190 [584/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:46.190 [585/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.190 [586/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:46.190 [587/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:46.190 [588/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:46.190 [589/745] Linking target lib/librte_timer.so.23.0 00:01:46.190 [590/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:46.190 [591/745] Linking target lib/librte_cfgfile.so.23.0 00:01:46.190 [592/745] Linking target lib/librte_acl.so.23.0 00:01:46.190 [593/745] Linking target lib/librte_jobstats.so.23.0 00:01:46.190 [594/745] Linking target lib/librte_rcu.so.23.0 00:01:46.190 [595/745] Linking target lib/librte_mempool.so.23.0 00:01:46.190 [596/745] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:46.190 [597/745] Linking target lib/librte_rawdev.so.23.0 00:01:46.190 [598/745] Linking target lib/librte_dmadev.so.23.0 00:01:46.190 [599/745] Linking target lib/librte_stack.so.23.0 00:01:46.453 [600/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:46.453 [601/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:46.453 [602/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:46.453 [603/745] Linking target lib/librte_graph.so.23.0 00:01:46.453 [604/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:46.453 [605/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:46.453 [606/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:46.453 [607/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:46.453 [608/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:46.453 [609/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:46.453 [610/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:46.453 [611/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:46.453 [612/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:46.722 [613/745] Linking target lib/librte_mbuf.so.23.0 00:01:46.722 [614/745] Linking target lib/librte_rib.so.23.0 00:01:46.722 [615/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:46.722 [616/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:46.722 [617/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:46.722 [618/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:46.722 [619/745] Generating 
symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:46.722 [620/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:46.722 [621/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:46.983 [622/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:46.983 [623/745] Linking target lib/librte_fib.so.23.0 00:01:46.983 [624/745] Linking target lib/librte_net.so.23.0 00:01:46.983 [625/745] Linking target lib/librte_bbdev.so.23.0 00:01:46.983 [626/745] Linking target lib/librte_compressdev.so.23.0 00:01:46.983 [627/745] Linking target lib/librte_distributor.so.23.0 00:01:46.983 [628/745] Linking target lib/librte_cryptodev.so.23.0 00:01:46.983 [629/745] Linking target lib/librte_gpudev.so.23.0 00:01:47.242 [630/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:47.242 [631/745] Linking target lib/librte_regexdev.so.23.0 00:01:47.242 [632/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:47.242 [633/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:47.242 [634/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:47.242 [635/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:47.242 [636/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:47.242 [637/745] Linking target lib/librte_ethdev.so.23.0 00:01:47.242 [638/745] Linking target lib/librte_cmdline.so.23.0 00:01:47.242 [639/745] Linking target lib/librte_hash.so.23.0 00:01:47.242 [640/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:47.242 [641/745] Linking target lib/librte_reorder.so.23.0 00:01:47.242 [642/745] Linking target lib/librte_sched.so.23.0 00:01:47.242 [643/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 
00:01:47.242 [644/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:47.242 [645/745] Linking target lib/librte_security.so.23.0 00:01:47.501 [646/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:47.501 [647/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:47.501 [648/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:47.501 [649/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:47.501 [650/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:47.501 [651/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:47.501 [652/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:47.501 [653/745] Linking target lib/librte_lpm.so.23.0 00:01:47.501 [654/745] Linking target lib/librte_efd.so.23.0 00:01:47.501 [655/745] Linking target lib/librte_member.so.23.0 00:01:47.501 [656/745] Linking target lib/librte_pcapng.so.23.0 00:01:47.501 [657/745] Linking target lib/librte_metrics.so.23.0 00:01:47.501 [658/745] Linking target lib/librte_gro.so.23.0 00:01:47.501 [659/745] Linking target lib/librte_gso.so.23.0 00:01:47.501 [660/745] Linking target lib/librte_ip_frag.so.23.0 00:01:47.501 [661/745] Linking target lib/librte_bpf.so.23.0 00:01:47.501 [662/745] Linking target lib/librte_power.so.23.0 00:01:47.501 [663/745] Linking target lib/librte_ipsec.so.23.0 00:01:47.501 [664/745] Linking target lib/librte_eventdev.so.23.0 00:01:47.501 [665/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:47.760 [666/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:47.760 [667/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:47.760 [668/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:47.760 [669/745] Compiling C object 
app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:47.760 [670/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:47.760 [671/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:47.760 [672/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:47.760 [673/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:47.760 [674/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:47.760 [675/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:47.760 [676/745] Linking target lib/librte_latencystats.so.23.0 00:01:47.760 [677/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:47.760 [678/745] Linking target lib/librte_bitratestats.so.23.0 00:01:47.760 [679/745] Linking target lib/librte_pdump.so.23.0 00:01:47.760 [680/745] Linking target lib/librte_port.so.23.0 00:01:47.760 [681/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:48.018 [682/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:48.018 [683/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:48.018 [684/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:48.018 [685/745] Linking target lib/librte_table.so.23.0 00:01:48.018 [686/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:48.018 [687/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:48.276 [688/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:48.276 [689/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:48.276 [690/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:48.276 [691/745] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:48.276 [692/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:48.534 [693/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:48.795 [694/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:48.795 [695/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:48.795 [696/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:48.795 [697/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:49.085 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:49.342 [699/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:49.342 [700/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:49.342 [701/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:49.342 [702/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:49.600 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:49.857 [704/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:49.857 [705/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:49.857 [706/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:49.857 [707/745] Linking static target drivers/librte_net_i40e.a 00:01:50.116 [708/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:50.116 [709/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:50.373 [710/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:50.373 [711/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:50.373 [712/745] 
Linking target drivers/librte_net_i40e.so.23.0 00:01:51.307 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:51.307 [714/745] Linking static target lib/librte_node.a 00:01:51.307 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.307 [716/745] Linking target lib/librte_node.so.23.0 00:01:51.872 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:52.129 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:53.061 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:01.217 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:33.297 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:33.297 [722/745] Linking static target lib/librte_vhost.a 00:02:33.297 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.297 [724/745] Linking target lib/librte_vhost.so.23.0 00:02:43.333 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:43.333 [726/745] Linking static target lib/librte_pipeline.a 00:02:43.333 [727/745] Linking target app/dpdk-dumpcap 00:02:43.333 [728/745] Linking target app/dpdk-test-cmdline 00:02:43.333 [729/745] Linking target app/dpdk-test-sad 00:02:43.333 [730/745] Linking target app/dpdk-test-acl 00:02:43.333 [731/745] Linking target app/dpdk-test-fib 00:02:43.333 [732/745] Linking target app/dpdk-pdump 00:02:43.333 [733/745] Linking target app/dpdk-test-security-perf 00:02:43.333 [734/745] Linking target app/dpdk-test-flow-perf 00:02:43.333 [735/745] Linking target app/dpdk-proc-info 00:02:43.333 [736/745] Linking target app/dpdk-test-regex 00:02:43.333 [737/745] Linking target app/dpdk-test-pipeline 00:02:43.333 [738/745] Linking target app/dpdk-test-gpudev 00:02:43.333 [739/745] Linking target app/dpdk-test-eventdev 00:02:43.333 [740/745] Linking target 
app/dpdk-test-bbdev 00:02:43.333 [741/745] Linking target app/dpdk-test-compress-perf 00:02:43.592 [742/745] Linking target app/dpdk-test-crypto-perf 00:02:43.592 [743/745] Linking target app/dpdk-testpmd 00:02:45.495 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.495 [745/745] Linking target lib/librte_pipeline.so.23.0 00:02:45.495 02:50:40 -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:45.495 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:45.495 [0/1] Installing files. 00:02:45.759 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:45.759 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:45.759 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.760 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 
00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:45.760 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:45.760 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.760 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.760 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:45.761 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:45.761 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:45.761 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.761 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.761 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.762 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.762 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:45.762 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.762 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.763 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.763 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.764 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:45.764 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:45.764 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:45.764 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:45.764 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_mempool.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_timer.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_efd.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_member.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.764 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_security.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.765 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:46.336 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:46.336 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:46.336 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:46.336 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:46.336 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:46.336 Installing lib/librte_graph.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:46.336 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:46.336 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:46.336 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:46.336 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:46.336 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:46.336 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:46.336 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:46.336 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:46.336 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:46.336 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:46.336 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.336 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.336 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.336 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.336 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.336 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.336 Installing app/dpdk-test-compress-perf to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.336 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.336 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.336 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.336 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.336 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.336 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.336 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.336 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.336 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.336 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:46.336 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.336 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.337 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.338 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:46.339 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:46.339 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:46.339 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:46.339 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:46.339 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:46.339 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:46.339 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:46.339 Installing 
symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:46.339 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:46.339 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:46.339 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:46.339 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:46.339 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:46.339 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:46.339 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:46.339 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:46.339 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:46.339 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:46.339 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:46.339 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:46.339 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:46.339 
Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:46.339 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:46.339 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:46.339 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:46.340 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:46.340 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:46.340 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:46.340 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:46.340 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:46.340 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:46.340 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:46.340 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:46.340 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:46.340 Installing symlink pointing to librte_bbdev.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:46.340 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:46.340 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:46.340 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:46.340 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:46.340 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:46.340 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:46.340 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:46.340 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:46.340 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:46.340 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:46.340 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:46.340 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:46.340 Installing symlink pointing to librte_efd.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:46.340 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:46.340 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:46.340 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:46.340 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:46.340 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:46.340 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:46.340 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:46.340 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:46.340 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:46.340 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:46.340 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:46.340 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:46.340 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:46.340 Installing 
symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:46.340 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:46.340 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:46.340 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:46.340 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:46.340 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:46.340 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:46.340 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:46.340 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:46.340 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:46.340 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:46.340 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:46.340 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:46.340 Installing symlink pointing to librte_regexdev.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:46.340 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:46.340 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:46.340 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:46.340 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:46.340 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:46.340 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:46.340 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:46.340 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:46.340 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:46.340 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:46.340 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:46.340 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:46.340 Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:46.340 Installing 
symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:46.340 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:46.340 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:46.340 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:46.340 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:46.340 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:46.340 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:46.340 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:46.340 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:46.340 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:46.340 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:46.340 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:46.340 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:46.340 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 
00:02:46.340 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:46.340 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:46.341 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:46.341 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:46.341 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:46.341 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:46.341 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:46.341 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:46.341 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:46.341 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:46.341 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:46.341 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:46.341 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:46.341 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:46.341 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:46.341 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:46.341 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:46.341 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:46.341 
'./librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:46.341 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:46.341 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:46.341 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:46.341 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:46.341 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:46.341 02:50:41 -- common/autobuild_common.sh@189 -- $ uname -s 00:02:46.341 02:50:41 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:46.341 02:50:41 -- common/autobuild_common.sh@200 -- $ cat 00:02:46.341 02:50:41 -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:46.341 00:02:46.341 real 1m21.050s 00:02:46.341 user 14m19.574s 00:02:46.341 sys 1m48.401s 00:02:46.341 02:50:41 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:46.341 02:50:41 -- common/autotest_common.sh@10 -- $ set +x 00:02:46.341 ************************************ 00:02:46.341 END TEST build_native_dpdk 00:02:46.341 ************************************ 00:02:46.341 02:50:41 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:46.341 02:50:41 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:46.341 02:50:41 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:46.341 02:50:41 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:46.341 02:50:41 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:46.341 02:50:41 
-- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:46.341 02:50:41 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:46.341 02:50:41 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:46.341 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:46.599 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:46.599 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:46.599 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:46.857 Using 'verbs' RDMA provider 00:02:57.399 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:03:05.508 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:05.508 Creating mk/config.mk...done. 00:03:05.509 Creating mk/cc.flags.mk...done. 00:03:05.509 Type 'make' to build. 00:03:05.509 02:51:00 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:05.509 02:51:00 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:03:05.509 02:51:00 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:05.509 02:51:00 -- common/autotest_common.sh@10 -- $ set +x 00:03:05.509 ************************************ 00:03:05.509 START TEST make 00:03:05.509 ************************************ 00:03:05.509 02:51:00 -- common/autotest_common.sh@1104 -- $ make -j48 00:03:05.765 make[1]: Nothing to be done for 'all'. 
00:03:07.157 The Meson build system 00:03:07.157 Version: 1.3.1 00:03:07.157 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:07.157 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:07.157 Build type: native build 00:03:07.157 Project name: libvfio-user 00:03:07.157 Project version: 0.0.1 00:03:07.157 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:07.157 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:07.157 Host machine cpu family: x86_64 00:03:07.157 Host machine cpu: x86_64 00:03:07.157 Run-time dependency threads found: YES 00:03:07.157 Library dl found: YES 00:03:07.157 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:07.157 Run-time dependency json-c found: YES 0.17 00:03:07.157 Run-time dependency cmocka found: YES 1.1.7 00:03:07.157 Program pytest-3 found: NO 00:03:07.157 Program flake8 found: NO 00:03:07.157 Program misspell-fixer found: NO 00:03:07.157 Program restructuredtext-lint found: NO 00:03:07.157 Program valgrind found: YES (/usr/bin/valgrind) 00:03:07.157 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:07.157 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:07.157 Compiler for C supports arguments -Wwrite-strings: YES 00:03:07.157 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:07.157 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:07.157 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:07.158 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:03:07.158 Build targets in project: 8 00:03:07.158 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:07.158 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:07.158 00:03:07.158 libvfio-user 0.0.1 00:03:07.158 00:03:07.158 User defined options 00:03:07.158 buildtype : debug 00:03:07.158 default_library: shared 00:03:07.158 libdir : /usr/local/lib 00:03:07.158 00:03:07.158 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:08.159 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:08.159 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:08.159 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:08.159 [3/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:08.159 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:08.159 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:08.159 [6/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:08.159 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:08.159 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:08.159 [9/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:08.159 [10/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:08.421 [11/37] Compiling C object samples/null.p/null.c.o 00:03:08.421 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:08.421 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:08.421 [14/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:08.421 [15/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:08.421 [16/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:08.421 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:08.421 [18/37] Compiling C object 
test/unit_tests.p/.._lib_tran.c.o 00:03:08.421 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:08.421 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:08.421 [21/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:08.421 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:08.421 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:08.421 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:08.421 [25/37] Compiling C object samples/server.p/server.c.o 00:03:08.421 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:08.421 [27/37] Compiling C object samples/client.p/client.c.o 00:03:08.421 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:03:08.683 [29/37] Linking target samples/client 00:03:08.683 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:08.683 [31/37] Linking target test/unit_tests 00:03:08.683 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:08.683 [33/37] Linking target samples/server 00:03:08.683 [34/37] Linking target samples/lspci 00:03:08.683 [35/37] Linking target samples/gpio-pci-idio-16 00:03:08.683 [36/37] Linking target samples/null 00:03:08.943 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:08.943 INFO: autodetecting backend as ninja 00:03:08.943 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:08.943 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:09.520 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:09.520 ninja: no work to do. 
00:03:21.723 CC lib/log/log.o 00:03:21.723 CC lib/log/log_flags.o 00:03:21.723 CC lib/log/log_deprecated.o 00:03:21.723 CC lib/ut_mock/mock.o 00:03:21.723 CC lib/ut/ut.o 00:03:21.723 LIB libspdk_ut_mock.a 00:03:21.723 SO libspdk_ut_mock.so.5.0 00:03:21.723 LIB libspdk_ut.a 00:03:21.723 LIB libspdk_log.a 00:03:21.723 SO libspdk_ut.so.1.0 00:03:21.723 SO libspdk_log.so.6.1 00:03:21.723 SYMLINK libspdk_ut_mock.so 00:03:21.723 SYMLINK libspdk_ut.so 00:03:21.723 SYMLINK libspdk_log.so 00:03:21.723 CC lib/dma/dma.o 00:03:21.723 CC lib/ioat/ioat.o 00:03:21.723 CC lib/util/base64.o 00:03:21.723 CXX lib/trace_parser/trace.o 00:03:21.723 CC lib/util/bit_array.o 00:03:21.723 CC lib/util/cpuset.o 00:03:21.723 CC lib/util/crc16.o 00:03:21.723 CC lib/util/crc32.o 00:03:21.723 CC lib/util/crc32c.o 00:03:21.723 CC lib/util/crc32_ieee.o 00:03:21.723 CC lib/util/crc64.o 00:03:21.723 CC lib/util/dif.o 00:03:21.723 CC lib/util/fd.o 00:03:21.723 CC lib/util/file.o 00:03:21.723 CC lib/util/hexlify.o 00:03:21.723 CC lib/util/iov.o 00:03:21.723 CC lib/util/math.o 00:03:21.723 CC lib/util/pipe.o 00:03:21.723 CC lib/util/strerror_tls.o 00:03:21.723 CC lib/util/string.o 00:03:21.723 CC lib/util/uuid.o 00:03:21.723 CC lib/util/fd_group.o 00:03:21.723 CC lib/util/xor.o 00:03:21.723 CC lib/util/zipf.o 00:03:21.723 CC lib/vfio_user/host/vfio_user_pci.o 00:03:21.723 CC lib/vfio_user/host/vfio_user.o 00:03:21.723 LIB libspdk_dma.a 00:03:21.723 SO libspdk_dma.so.3.0 00:03:21.723 SYMLINK libspdk_dma.so 00:03:21.723 LIB libspdk_ioat.a 00:03:21.723 SO libspdk_ioat.so.6.0 00:03:21.723 SYMLINK libspdk_ioat.so 00:03:21.723 LIB libspdk_vfio_user.a 00:03:21.723 SO libspdk_vfio_user.so.4.0 00:03:21.723 SYMLINK libspdk_vfio_user.so 00:03:21.723 LIB libspdk_util.a 00:03:21.723 SO libspdk_util.so.8.0 00:03:21.982 SYMLINK libspdk_util.so 00:03:21.982 CC lib/json/json_parse.o 00:03:21.982 CC lib/vmd/vmd.o 00:03:21.982 CC lib/rdma/common.o 00:03:21.982 CC lib/rdma/rdma_verbs.o 00:03:21.982 CC lib/json/json_util.o 
00:03:21.982 CC lib/conf/conf.o 00:03:21.982 CC lib/env_dpdk/env.o 00:03:21.982 CC lib/vmd/led.o 00:03:21.982 CC lib/json/json_write.o 00:03:21.982 CC lib/env_dpdk/memory.o 00:03:21.982 CC lib/env_dpdk/pci.o 00:03:21.982 CC lib/idxd/idxd.o 00:03:21.982 CC lib/env_dpdk/init.o 00:03:21.982 CC lib/idxd/idxd_user.o 00:03:21.982 CC lib/env_dpdk/threads.o 00:03:21.982 CC lib/idxd/idxd_kernel.o 00:03:21.982 CC lib/env_dpdk/pci_ioat.o 00:03:21.982 CC lib/env_dpdk/pci_virtio.o 00:03:21.982 CC lib/env_dpdk/pci_vmd.o 00:03:21.982 CC lib/env_dpdk/pci_idxd.o 00:03:21.982 CC lib/env_dpdk/pci_event.o 00:03:21.982 CC lib/env_dpdk/sigbus_handler.o 00:03:21.982 CC lib/env_dpdk/pci_dpdk.o 00:03:21.982 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:21.982 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:22.240 LIB libspdk_trace_parser.a 00:03:22.240 SO libspdk_trace_parser.so.4.0 00:03:22.240 LIB libspdk_conf.a 00:03:22.240 SO libspdk_conf.so.5.0 00:03:22.240 SYMLINK libspdk_trace_parser.so 00:03:22.497 LIB libspdk_rdma.a 00:03:22.497 SYMLINK libspdk_conf.so 00:03:22.497 SO libspdk_rdma.so.5.0 00:03:22.497 LIB libspdk_json.a 00:03:22.497 SYMLINK libspdk_rdma.so 00:03:22.497 SO libspdk_json.so.5.1 00:03:22.497 SYMLINK libspdk_json.so 00:03:22.497 LIB libspdk_idxd.a 00:03:22.754 SO libspdk_idxd.so.11.0 00:03:22.754 CC lib/jsonrpc/jsonrpc_server.o 00:03:22.754 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:22.754 CC lib/jsonrpc/jsonrpc_client.o 00:03:22.754 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:22.754 SYMLINK libspdk_idxd.so 00:03:22.754 LIB libspdk_vmd.a 00:03:22.754 SO libspdk_vmd.so.5.0 00:03:22.754 SYMLINK libspdk_vmd.so 00:03:23.012 LIB libspdk_jsonrpc.a 00:03:23.012 SO libspdk_jsonrpc.so.5.1 00:03:23.012 SYMLINK libspdk_jsonrpc.so 00:03:23.012 CC lib/rpc/rpc.o 00:03:23.269 LIB libspdk_rpc.a 00:03:23.269 SO libspdk_rpc.so.5.0 00:03:23.269 SYMLINK libspdk_rpc.so 00:03:23.526 CC lib/sock/sock.o 00:03:23.526 CC lib/notify/notify.o 00:03:23.526 CC lib/sock/sock_rpc.o 00:03:23.526 CC lib/trace/trace.o 
00:03:23.526 CC lib/notify/notify_rpc.o 00:03:23.526 CC lib/trace/trace_flags.o 00:03:23.526 CC lib/trace/trace_rpc.o 00:03:23.784 LIB libspdk_notify.a 00:03:23.784 SO libspdk_notify.so.5.0 00:03:23.784 LIB libspdk_trace.a 00:03:23.784 SYMLINK libspdk_notify.so 00:03:23.784 SO libspdk_trace.so.9.0 00:03:23.784 SYMLINK libspdk_trace.so 00:03:23.784 LIB libspdk_sock.a 00:03:23.784 SO libspdk_sock.so.8.0 00:03:24.043 CC lib/thread/thread.o 00:03:24.043 CC lib/thread/iobuf.o 00:03:24.043 SYMLINK libspdk_sock.so 00:03:24.043 LIB libspdk_env_dpdk.a 00:03:24.043 SO libspdk_env_dpdk.so.13.0 00:03:24.043 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:24.043 CC lib/nvme/nvme_ctrlr.o 00:03:24.043 CC lib/nvme/nvme_fabric.o 00:03:24.043 CC lib/nvme/nvme_ns_cmd.o 00:03:24.043 CC lib/nvme/nvme_ns.o 00:03:24.043 CC lib/nvme/nvme_pcie_common.o 00:03:24.043 CC lib/nvme/nvme_pcie.o 00:03:24.043 CC lib/nvme/nvme_qpair.o 00:03:24.043 CC lib/nvme/nvme.o 00:03:24.043 CC lib/nvme/nvme_quirks.o 00:03:24.043 CC lib/nvme/nvme_transport.o 00:03:24.043 CC lib/nvme/nvme_discovery.o 00:03:24.043 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:24.043 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:24.043 CC lib/nvme/nvme_tcp.o 00:03:24.043 CC lib/nvme/nvme_opal.o 00:03:24.043 CC lib/nvme/nvme_io_msg.o 00:03:24.043 CC lib/nvme/nvme_poll_group.o 00:03:24.043 CC lib/nvme/nvme_zns.o 00:03:24.043 CC lib/nvme/nvme_cuse.o 00:03:24.043 CC lib/nvme/nvme_vfio_user.o 00:03:24.043 CC lib/nvme/nvme_rdma.o 00:03:24.301 SYMLINK libspdk_env_dpdk.so 00:03:25.674 LIB libspdk_thread.a 00:03:25.674 SO libspdk_thread.so.9.0 00:03:25.674 SYMLINK libspdk_thread.so 00:03:25.674 CC lib/blob/blobstore.o 00:03:25.674 CC lib/virtio/virtio.o 00:03:25.674 CC lib/accel/accel.o 00:03:25.674 CC lib/vfu_tgt/tgt_endpoint.o 00:03:25.674 CC lib/blob/request.o 00:03:25.674 CC lib/accel/accel_rpc.o 00:03:25.674 CC lib/blob/zeroes.o 00:03:25.674 CC lib/virtio/virtio_vhost_user.o 00:03:25.674 CC lib/init/json_config.o 00:03:25.674 CC lib/vfu_tgt/tgt_rpc.o 
00:03:25.674 CC lib/accel/accel_sw.o 00:03:25.674 CC lib/blob/blob_bs_dev.o 00:03:25.674 CC lib/virtio/virtio_vfio_user.o 00:03:25.674 CC lib/init/subsystem.o 00:03:25.674 CC lib/virtio/virtio_pci.o 00:03:25.674 CC lib/init/subsystem_rpc.o 00:03:25.674 CC lib/init/rpc.o 00:03:25.932 LIB libspdk_init.a 00:03:25.932 SO libspdk_init.so.4.0 00:03:25.932 LIB libspdk_virtio.a 00:03:25.932 LIB libspdk_vfu_tgt.a 00:03:26.190 SYMLINK libspdk_init.so 00:03:26.190 SO libspdk_vfu_tgt.so.2.0 00:03:26.190 SO libspdk_virtio.so.6.0 00:03:26.190 SYMLINK libspdk_vfu_tgt.so 00:03:26.190 SYMLINK libspdk_virtio.so 00:03:26.190 CC lib/event/app.o 00:03:26.190 CC lib/event/reactor.o 00:03:26.190 CC lib/event/log_rpc.o 00:03:26.190 CC lib/event/app_rpc.o 00:03:26.190 CC lib/event/scheduler_static.o 00:03:26.448 LIB libspdk_nvme.a 00:03:26.448 SO libspdk_nvme.so.12.0 00:03:26.448 LIB libspdk_event.a 00:03:26.705 SO libspdk_event.so.12.0 00:03:26.705 SYMLINK libspdk_event.so 00:03:26.705 SYMLINK libspdk_nvme.so 00:03:26.705 LIB libspdk_accel.a 00:03:26.705 SO libspdk_accel.so.14.0 00:03:26.963 SYMLINK libspdk_accel.so 00:03:26.963 CC lib/bdev/bdev.o 00:03:26.963 CC lib/bdev/bdev_rpc.o 00:03:26.963 CC lib/bdev/bdev_zone.o 00:03:26.963 CC lib/bdev/part.o 00:03:26.963 CC lib/bdev/scsi_nvme.o 00:03:28.861 LIB libspdk_blob.a 00:03:28.861 SO libspdk_blob.so.10.1 00:03:28.861 SYMLINK libspdk_blob.so 00:03:28.861 CC lib/blobfs/blobfs.o 00:03:28.861 CC lib/blobfs/tree.o 00:03:28.861 CC lib/lvol/lvol.o 00:03:29.427 LIB libspdk_bdev.a 00:03:29.427 LIB libspdk_blobfs.a 00:03:29.427 SO libspdk_bdev.so.14.0 00:03:29.427 SO libspdk_blobfs.so.9.0 00:03:29.689 LIB libspdk_lvol.a 00:03:29.689 SO libspdk_lvol.so.9.1 00:03:29.689 SYMLINK libspdk_blobfs.so 00:03:29.689 SYMLINK libspdk_bdev.so 00:03:29.689 SYMLINK libspdk_lvol.so 00:03:29.689 CC lib/nbd/nbd.o 00:03:29.689 CC lib/nbd/nbd_rpc.o 00:03:29.689 CC lib/scsi/dev.o 00:03:29.689 CC lib/nvmf/ctrlr.o 00:03:29.689 CC lib/scsi/lun.o 00:03:29.689 CC 
lib/nvmf/ctrlr_discovery.o 00:03:29.689 CC lib/scsi/port.o 00:03:29.689 CC lib/ublk/ublk.o 00:03:29.689 CC lib/ftl/ftl_core.o 00:03:29.689 CC lib/nvmf/ctrlr_bdev.o 00:03:29.689 CC lib/scsi/scsi.o 00:03:29.689 CC lib/ftl/ftl_init.o 00:03:29.689 CC lib/nvmf/subsystem.o 00:03:29.689 CC lib/ublk/ublk_rpc.o 00:03:29.689 CC lib/scsi/scsi_bdev.o 00:03:29.689 CC lib/nvmf/nvmf.o 00:03:29.689 CC lib/ftl/ftl_layout.o 00:03:29.689 CC lib/scsi/scsi_pr.o 00:03:29.689 CC lib/nvmf/nvmf_rpc.o 00:03:29.689 CC lib/ftl/ftl_debug.o 00:03:29.689 CC lib/ftl/ftl_io.o 00:03:29.689 CC lib/nvmf/transport.o 00:03:29.689 CC lib/scsi/scsi_rpc.o 00:03:29.689 CC lib/scsi/task.o 00:03:29.689 CC lib/nvmf/tcp.o 00:03:29.689 CC lib/ftl/ftl_sb.o 00:03:29.689 CC lib/nvmf/vfio_user.o 00:03:29.689 CC lib/nvmf/rdma.o 00:03:29.689 CC lib/ftl/ftl_l2p.o 00:03:29.689 CC lib/ftl/ftl_l2p_flat.o 00:03:29.689 CC lib/ftl/ftl_nv_cache.o 00:03:29.689 CC lib/ftl/ftl_band.o 00:03:29.689 CC lib/ftl/ftl_band_ops.o 00:03:29.689 CC lib/ftl/ftl_writer.o 00:03:29.689 CC lib/ftl/ftl_rq.o 00:03:29.689 CC lib/ftl/ftl_reloc.o 00:03:29.689 CC lib/ftl/ftl_l2p_cache.o 00:03:29.689 CC lib/ftl/ftl_p2l.o 00:03:29.689 CC lib/ftl/mngt/ftl_mngt.o 00:03:29.689 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:29.689 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:29.689 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:29.689 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:29.689 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:29.689 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:29.689 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:29.689 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:29.689 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:29.952 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:29.952 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:29.952 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:29.952 CC lib/ftl/utils/ftl_conf.o 00:03:30.232 CC lib/ftl/utils/ftl_md.o 00:03:30.232 CC lib/ftl/utils/ftl_mempool.o 00:03:30.232 CC lib/ftl/utils/ftl_bitmap.o 00:03:30.232 CC lib/ftl/utils/ftl_property.o 00:03:30.232 CC 
lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:30.232 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:30.232 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:30.232 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:30.232 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:30.232 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:30.232 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:30.232 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:30.232 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:30.232 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:30.232 CC lib/ftl/base/ftl_base_dev.o 00:03:30.232 CC lib/ftl/base/ftl_base_bdev.o 00:03:30.232 CC lib/ftl/ftl_trace.o 00:03:30.532 LIB libspdk_nbd.a 00:03:30.532 SO libspdk_nbd.so.6.0 00:03:30.532 LIB libspdk_scsi.a 00:03:30.532 SYMLINK libspdk_nbd.so 00:03:30.532 SO libspdk_scsi.so.8.0 00:03:30.791 LIB libspdk_ublk.a 00:03:30.791 SYMLINK libspdk_scsi.so 00:03:30.791 SO libspdk_ublk.so.2.0 00:03:30.791 SYMLINK libspdk_ublk.so 00:03:30.791 CC lib/iscsi/conn.o 00:03:30.791 CC lib/vhost/vhost.o 00:03:30.791 CC lib/vhost/vhost_rpc.o 00:03:30.791 CC lib/iscsi/init_grp.o 00:03:30.791 CC lib/vhost/vhost_scsi.o 00:03:30.791 CC lib/iscsi/iscsi.o 00:03:30.791 CC lib/vhost/vhost_blk.o 00:03:30.791 CC lib/iscsi/md5.o 00:03:30.791 CC lib/vhost/rte_vhost_user.o 00:03:30.791 CC lib/iscsi/param.o 00:03:30.791 CC lib/iscsi/portal_grp.o 00:03:30.791 CC lib/iscsi/tgt_node.o 00:03:30.791 CC lib/iscsi/iscsi_subsystem.o 00:03:30.791 CC lib/iscsi/iscsi_rpc.o 00:03:30.791 CC lib/iscsi/task.o 00:03:31.049 LIB libspdk_ftl.a 00:03:31.308 SO libspdk_ftl.so.8.0 00:03:31.566 SYMLINK libspdk_ftl.so 00:03:32.133 LIB libspdk_vhost.a 00:03:32.133 SO libspdk_vhost.so.7.1 00:03:32.133 SYMLINK libspdk_vhost.so 00:03:32.133 LIB libspdk_nvmf.a 00:03:32.133 LIB libspdk_iscsi.a 00:03:32.391 SO libspdk_nvmf.so.17.0 00:03:32.391 SO libspdk_iscsi.so.7.0 00:03:32.391 SYMLINK libspdk_iscsi.so 00:03:32.391 SYMLINK libspdk_nvmf.so 00:03:32.649 CC module/vfu_device/vfu_virtio.o 00:03:32.649 CC module/vfu_device/vfu_virtio_blk.o 00:03:32.649 CC 
module/vfu_device/vfu_virtio_scsi.o 00:03:32.649 CC module/vfu_device/vfu_virtio_rpc.o 00:03:32.649 CC module/env_dpdk/env_dpdk_rpc.o 00:03:32.649 CC module/accel/dsa/accel_dsa.o 00:03:32.649 CC module/scheduler/gscheduler/gscheduler.o 00:03:32.649 CC module/accel/ioat/accel_ioat.o 00:03:32.649 CC module/sock/posix/posix.o 00:03:32.649 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:32.649 CC module/blob/bdev/blob_bdev.o 00:03:32.649 CC module/accel/ioat/accel_ioat_rpc.o 00:03:32.649 CC module/accel/error/accel_error.o 00:03:32.649 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:32.649 CC module/accel/dsa/accel_dsa_rpc.o 00:03:32.649 CC module/accel/iaa/accel_iaa.o 00:03:32.649 CC module/accel/error/accel_error_rpc.o 00:03:32.649 CC module/accel/iaa/accel_iaa_rpc.o 00:03:32.649 LIB libspdk_env_dpdk_rpc.a 00:03:32.908 SO libspdk_env_dpdk_rpc.so.5.0 00:03:32.908 LIB libspdk_scheduler_gscheduler.a 00:03:32.908 SYMLINK libspdk_env_dpdk_rpc.so 00:03:32.908 LIB libspdk_scheduler_dpdk_governor.a 00:03:32.908 SO libspdk_scheduler_gscheduler.so.3.0 00:03:32.908 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:32.908 LIB libspdk_accel_error.a 00:03:32.908 LIB libspdk_accel_ioat.a 00:03:32.908 LIB libspdk_scheduler_dynamic.a 00:03:32.908 LIB libspdk_accel_iaa.a 00:03:32.908 SO libspdk_accel_error.so.1.0 00:03:32.908 SO libspdk_accel_ioat.so.5.0 00:03:32.908 SO libspdk_scheduler_dynamic.so.3.0 00:03:32.908 SYMLINK libspdk_scheduler_gscheduler.so 00:03:32.908 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:32.908 SO libspdk_accel_iaa.so.2.0 00:03:32.908 LIB libspdk_accel_dsa.a 00:03:32.908 SYMLINK libspdk_accel_error.so 00:03:32.908 SYMLINK libspdk_scheduler_dynamic.so 00:03:32.908 LIB libspdk_blob_bdev.a 00:03:32.908 SO libspdk_accel_dsa.so.4.0 00:03:32.908 SYMLINK libspdk_accel_ioat.so 00:03:32.908 SYMLINK libspdk_accel_iaa.so 00:03:32.908 SO libspdk_blob_bdev.so.10.1 00:03:32.908 SYMLINK libspdk_accel_dsa.so 00:03:33.166 SYMLINK libspdk_blob_bdev.so 
00:03:33.166 CC module/bdev/passthru/vbdev_passthru.o 00:03:33.166 CC module/bdev/lvol/vbdev_lvol.o 00:03:33.166 CC module/blobfs/bdev/blobfs_bdev.o 00:03:33.166 CC module/bdev/gpt/gpt.o 00:03:33.166 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:33.167 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:33.167 CC module/bdev/error/vbdev_error.o 00:03:33.167 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:33.167 CC module/bdev/gpt/vbdev_gpt.o 00:03:33.167 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:33.167 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:33.167 CC module/bdev/split/vbdev_split.o 00:03:33.167 CC module/bdev/error/vbdev_error_rpc.o 00:03:33.167 CC module/bdev/null/bdev_null.o 00:03:33.167 CC module/bdev/malloc/bdev_malloc.o 00:03:33.167 CC module/bdev/null/bdev_null_rpc.o 00:03:33.167 CC module/bdev/split/vbdev_split_rpc.o 00:03:33.167 CC module/bdev/raid/bdev_raid.o 00:03:33.167 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:33.167 CC module/bdev/delay/vbdev_delay.o 00:03:33.167 CC module/bdev/aio/bdev_aio.o 00:03:33.167 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:33.167 CC module/bdev/iscsi/bdev_iscsi.o 00:03:33.167 CC module/bdev/nvme/bdev_nvme.o 00:03:33.167 CC module/bdev/raid/bdev_raid_rpc.o 00:03:33.167 CC module/bdev/ftl/bdev_ftl.o 00:03:33.167 CC module/bdev/aio/bdev_aio_rpc.o 00:03:33.167 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:33.167 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:33.167 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:33.167 CC module/bdev/raid/bdev_raid_sb.o 00:03:33.167 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:33.167 CC module/bdev/nvme/nvme_rpc.o 00:03:33.167 CC module/bdev/raid/raid0.o 00:03:33.167 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:33.167 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:33.167 CC module/bdev/raid/raid1.o 00:03:33.167 CC module/bdev/nvme/bdev_mdns_client.o 00:03:33.167 CC module/bdev/raid/concat.o 00:03:33.167 CC module/bdev/nvme/vbdev_opal.o 00:03:33.167 CC module/bdev/nvme/vbdev_opal_rpc.o 
00:03:33.167 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:33.427 LIB libspdk_vfu_device.a 00:03:33.427 SO libspdk_vfu_device.so.2.0 00:03:33.427 SYMLINK libspdk_vfu_device.so 00:03:33.688 LIB libspdk_bdev_error.a 00:03:33.688 LIB libspdk_sock_posix.a 00:03:33.688 LIB libspdk_blobfs_bdev.a 00:03:33.688 SO libspdk_bdev_error.so.5.0 00:03:33.688 SO libspdk_blobfs_bdev.so.5.0 00:03:33.688 SO libspdk_sock_posix.so.5.0 00:03:33.688 LIB libspdk_bdev_split.a 00:03:33.688 SYMLINK libspdk_bdev_error.so 00:03:33.688 LIB libspdk_bdev_ftl.a 00:03:33.688 LIB libspdk_bdev_null.a 00:03:33.688 LIB libspdk_bdev_gpt.a 00:03:33.688 LIB libspdk_bdev_iscsi.a 00:03:33.688 SO libspdk_bdev_split.so.5.0 00:03:33.688 SYMLINK libspdk_blobfs_bdev.so 00:03:33.688 SO libspdk_bdev_ftl.so.5.0 00:03:33.688 SO libspdk_bdev_null.so.5.0 00:03:33.688 SO libspdk_bdev_gpt.so.5.0 00:03:33.688 SYMLINK libspdk_sock_posix.so 00:03:33.688 SO libspdk_bdev_iscsi.so.5.0 00:03:33.688 LIB libspdk_bdev_passthru.a 00:03:33.688 LIB libspdk_bdev_aio.a 00:03:33.688 SYMLINK libspdk_bdev_split.so 00:03:33.688 SYMLINK libspdk_bdev_ftl.so 00:03:33.688 SO libspdk_bdev_passthru.so.5.0 00:03:33.688 SO libspdk_bdev_aio.so.5.0 00:03:33.688 SYMLINK libspdk_bdev_gpt.so 00:03:33.688 SYMLINK libspdk_bdev_null.so 00:03:33.688 LIB libspdk_bdev_zone_block.a 00:03:33.688 SYMLINK libspdk_bdev_iscsi.so 00:03:33.688 SO libspdk_bdev_zone_block.so.5.0 00:03:33.946 SYMLINK libspdk_bdev_aio.so 00:03:33.946 SYMLINK libspdk_bdev_passthru.so 00:03:33.946 LIB libspdk_bdev_malloc.a 00:03:33.946 SYMLINK libspdk_bdev_zone_block.so 00:03:33.946 LIB libspdk_bdev_delay.a 00:03:33.946 SO libspdk_bdev_malloc.so.5.0 00:03:33.946 SO libspdk_bdev_delay.so.5.0 00:03:33.946 LIB libspdk_bdev_lvol.a 00:03:33.946 LIB libspdk_bdev_virtio.a 00:03:33.946 SO libspdk_bdev_lvol.so.5.0 00:03:33.946 SYMLINK libspdk_bdev_malloc.so 00:03:33.946 SYMLINK libspdk_bdev_delay.so 00:03:33.946 SO libspdk_bdev_virtio.so.5.0 00:03:33.946 SYMLINK libspdk_bdev_lvol.so 00:03:33.946 
SYMLINK libspdk_bdev_virtio.so 00:03:34.204 LIB libspdk_bdev_raid.a 00:03:34.462 SO libspdk_bdev_raid.so.5.0 00:03:34.462 SYMLINK libspdk_bdev_raid.so 00:03:35.399 LIB libspdk_bdev_nvme.a 00:03:35.399 SO libspdk_bdev_nvme.so.6.0 00:03:35.658 SYMLINK libspdk_bdev_nvme.so 00:03:35.917 CC module/event/subsystems/vmd/vmd.o 00:03:35.917 CC module/event/subsystems/iobuf/iobuf.o 00:03:35.917 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:35.917 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:35.917 CC module/event/subsystems/sock/sock.o 00:03:35.917 CC module/event/subsystems/scheduler/scheduler.o 00:03:35.917 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:35.917 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:35.917 LIB libspdk_event_sock.a 00:03:35.917 LIB libspdk_event_vhost_blk.a 00:03:35.917 LIB libspdk_event_vfu_tgt.a 00:03:35.917 LIB libspdk_event_vmd.a 00:03:35.917 LIB libspdk_event_scheduler.a 00:03:35.917 LIB libspdk_event_iobuf.a 00:03:35.917 SO libspdk_event_sock.so.4.0 00:03:35.917 SO libspdk_event_vhost_blk.so.2.0 00:03:35.917 SO libspdk_event_vfu_tgt.so.2.0 00:03:35.917 SO libspdk_event_vmd.so.5.0 00:03:35.917 SO libspdk_event_scheduler.so.3.0 00:03:35.917 SO libspdk_event_iobuf.so.2.0 00:03:35.917 SYMLINK libspdk_event_sock.so 00:03:36.175 SYMLINK libspdk_event_vhost_blk.so 00:03:36.175 SYMLINK libspdk_event_vfu_tgt.so 00:03:36.175 SYMLINK libspdk_event_scheduler.so 00:03:36.175 SYMLINK libspdk_event_vmd.so 00:03:36.175 SYMLINK libspdk_event_iobuf.so 00:03:36.175 CC module/event/subsystems/accel/accel.o 00:03:36.451 LIB libspdk_event_accel.a 00:03:36.451 SO libspdk_event_accel.so.5.0 00:03:36.451 SYMLINK libspdk_event_accel.so 00:03:36.451 CC module/event/subsystems/bdev/bdev.o 00:03:36.710 LIB libspdk_event_bdev.a 00:03:36.710 SO libspdk_event_bdev.so.5.0 00:03:36.710 SYMLINK libspdk_event_bdev.so 00:03:36.969 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:36.969 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:36.969 CC 
module/event/subsystems/scsi/scsi.o 00:03:36.969 CC module/event/subsystems/nbd/nbd.o 00:03:36.969 CC module/event/subsystems/ublk/ublk.o 00:03:36.969 LIB libspdk_event_nbd.a 00:03:36.969 LIB libspdk_event_ublk.a 00:03:36.969 LIB libspdk_event_scsi.a 00:03:36.969 SO libspdk_event_nbd.so.5.0 00:03:36.969 SO libspdk_event_ublk.so.2.0 00:03:36.969 SO libspdk_event_scsi.so.5.0 00:03:37.228 SYMLINK libspdk_event_ublk.so 00:03:37.228 SYMLINK libspdk_event_nbd.so 00:03:37.228 SYMLINK libspdk_event_scsi.so 00:03:37.228 LIB libspdk_event_nvmf.a 00:03:37.228 SO libspdk_event_nvmf.so.5.0 00:03:37.228 SYMLINK libspdk_event_nvmf.so 00:03:37.228 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:37.228 CC module/event/subsystems/iscsi/iscsi.o 00:03:37.487 LIB libspdk_event_vhost_scsi.a 00:03:37.487 SO libspdk_event_vhost_scsi.so.2.0 00:03:37.487 LIB libspdk_event_iscsi.a 00:03:37.487 SYMLINK libspdk_event_vhost_scsi.so 00:03:37.487 SO libspdk_event_iscsi.so.5.0 00:03:37.487 SYMLINK libspdk_event_iscsi.so 00:03:37.487 SO libspdk.so.5.0 00:03:37.487 SYMLINK libspdk.so 00:03:37.751 CXX app/trace/trace.o 00:03:37.752 CC app/trace_record/trace_record.o 00:03:37.752 CC app/spdk_nvme_perf/perf.o 00:03:37.752 CC app/spdk_nvme_discover/discovery_aer.o 00:03:37.752 CC app/spdk_top/spdk_top.o 00:03:37.752 CC app/spdk_lspci/spdk_lspci.o 00:03:37.752 TEST_HEADER include/spdk/accel.h 00:03:37.752 CC app/spdk_nvme_identify/identify.o 00:03:37.752 TEST_HEADER include/spdk/accel_module.h 00:03:37.752 TEST_HEADER include/spdk/assert.h 00:03:37.752 CC test/rpc_client/rpc_client_test.o 00:03:37.752 TEST_HEADER include/spdk/barrier.h 00:03:37.752 TEST_HEADER include/spdk/base64.h 00:03:37.752 TEST_HEADER include/spdk/bdev.h 00:03:37.752 TEST_HEADER include/spdk/bdev_module.h 00:03:37.752 TEST_HEADER include/spdk/bdev_zone.h 00:03:37.752 TEST_HEADER include/spdk/bit_array.h 00:03:37.752 TEST_HEADER include/spdk/bit_pool.h 00:03:37.752 TEST_HEADER include/spdk/blob_bdev.h 00:03:37.752 
TEST_HEADER include/spdk/blobfs_bdev.h 00:03:37.752 TEST_HEADER include/spdk/blobfs.h 00:03:37.752 TEST_HEADER include/spdk/blob.h 00:03:37.752 TEST_HEADER include/spdk/conf.h 00:03:37.752 TEST_HEADER include/spdk/config.h 00:03:37.752 TEST_HEADER include/spdk/cpuset.h 00:03:37.752 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:37.752 TEST_HEADER include/spdk/crc16.h 00:03:37.752 TEST_HEADER include/spdk/crc32.h 00:03:37.752 CC app/spdk_dd/spdk_dd.o 00:03:37.752 TEST_HEADER include/spdk/crc64.h 00:03:37.752 TEST_HEADER include/spdk/dif.h 00:03:37.752 CC app/nvmf_tgt/nvmf_main.o 00:03:37.752 TEST_HEADER include/spdk/dma.h 00:03:37.752 CC app/iscsi_tgt/iscsi_tgt.o 00:03:37.752 TEST_HEADER include/spdk/endian.h 00:03:37.752 CC examples/ioat/perf/perf.o 00:03:37.752 TEST_HEADER include/spdk/env_dpdk.h 00:03:37.752 TEST_HEADER include/spdk/env.h 00:03:37.752 CC app/vhost/vhost.o 00:03:37.752 CC examples/ioat/verify/verify.o 00:03:37.752 CC examples/util/zipf/zipf.o 00:03:37.752 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:37.752 TEST_HEADER include/spdk/event.h 00:03:37.752 TEST_HEADER include/spdk/fd_group.h 00:03:37.752 CC test/event/reactor/reactor.o 00:03:37.752 CC examples/nvme/arbitration/arbitration.o 00:03:37.752 CC examples/sock/hello_world/hello_sock.o 00:03:37.752 CC examples/idxd/perf/perf.o 00:03:37.752 TEST_HEADER include/spdk/fd.h 00:03:37.752 CC examples/accel/perf/accel_perf.o 00:03:37.752 CC examples/nvme/reconnect/reconnect.o 00:03:37.752 CC examples/vmd/lsvmd/lsvmd.o 00:03:37.752 CC test/event/event_perf/event_perf.o 00:03:37.752 TEST_HEADER include/spdk/file.h 00:03:37.752 CC examples/nvme/hello_world/hello_world.o 00:03:37.752 TEST_HEADER include/spdk/ftl.h 00:03:37.752 TEST_HEADER include/spdk/gpt_spec.h 00:03:37.752 CC test/nvme/aer/aer.o 00:03:37.752 CC app/fio/nvme/fio_plugin.o 00:03:37.752 CC test/thread/poller_perf/poller_perf.o 00:03:37.752 CC test/app/jsoncat/jsoncat.o 00:03:37.752 CC test/app/histogram_perf/histogram_perf.o 
00:03:37.752 CC examples/nvme/hotplug/hotplug.o 00:03:37.752 TEST_HEADER include/spdk/hexlify.h 00:03:37.752 TEST_HEADER include/spdk/histogram_data.h 00:03:37.752 TEST_HEADER include/spdk/idxd.h 00:03:37.752 TEST_HEADER include/spdk/idxd_spec.h 00:03:37.752 CC app/spdk_tgt/spdk_tgt.o 00:03:37.752 TEST_HEADER include/spdk/init.h 00:03:37.752 TEST_HEADER include/spdk/ioat.h 00:03:37.752 TEST_HEADER include/spdk/ioat_spec.h 00:03:38.018 TEST_HEADER include/spdk/iscsi_spec.h 00:03:38.018 TEST_HEADER include/spdk/json.h 00:03:38.018 TEST_HEADER include/spdk/jsonrpc.h 00:03:38.018 TEST_HEADER include/spdk/likely.h 00:03:38.018 TEST_HEADER include/spdk/log.h 00:03:38.018 TEST_HEADER include/spdk/lvol.h 00:03:38.018 CC examples/blob/hello_world/hello_blob.o 00:03:38.018 TEST_HEADER include/spdk/memory.h 00:03:38.018 CC examples/blob/cli/blobcli.o 00:03:38.018 CC examples/bdev/hello_world/hello_bdev.o 00:03:38.018 TEST_HEADER include/spdk/mmio.h 00:03:38.018 CC examples/thread/thread/thread_ex.o 00:03:38.018 CC test/blobfs/mkfs/mkfs.o 00:03:38.018 TEST_HEADER include/spdk/nbd.h 00:03:38.018 TEST_HEADER include/spdk/notify.h 00:03:38.018 TEST_HEADER include/spdk/nvme.h 00:03:38.018 TEST_HEADER include/spdk/nvme_intel.h 00:03:38.018 CC test/dma/test_dma/test_dma.o 00:03:38.018 CC test/bdev/bdevio/bdevio.o 00:03:38.018 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:38.018 CC test/app/bdev_svc/bdev_svc.o 00:03:38.018 CC examples/nvmf/nvmf/nvmf.o 00:03:38.018 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:38.018 CC test/accel/dif/dif.o 00:03:38.018 TEST_HEADER include/spdk/nvme_spec.h 00:03:38.018 TEST_HEADER include/spdk/nvme_zns.h 00:03:38.018 CC examples/bdev/bdevperf/bdevperf.o 00:03:38.018 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:38.018 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:38.018 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:38.018 CC test/lvol/esnap/esnap.o 00:03:38.018 TEST_HEADER include/spdk/nvmf.h 00:03:38.018 TEST_HEADER include/spdk/nvmf_spec.h 00:03:38.018 
TEST_HEADER include/spdk/nvmf_transport.h 00:03:38.018 TEST_HEADER include/spdk/opal.h 00:03:38.018 CC test/env/mem_callbacks/mem_callbacks.o 00:03:38.018 TEST_HEADER include/spdk/opal_spec.h 00:03:38.018 TEST_HEADER include/spdk/pci_ids.h 00:03:38.018 TEST_HEADER include/spdk/pipe.h 00:03:38.018 TEST_HEADER include/spdk/queue.h 00:03:38.018 TEST_HEADER include/spdk/reduce.h 00:03:38.018 TEST_HEADER include/spdk/rpc.h 00:03:38.018 TEST_HEADER include/spdk/scheduler.h 00:03:38.018 TEST_HEADER include/spdk/scsi.h 00:03:38.018 TEST_HEADER include/spdk/scsi_spec.h 00:03:38.018 TEST_HEADER include/spdk/sock.h 00:03:38.018 TEST_HEADER include/spdk/stdinc.h 00:03:38.018 TEST_HEADER include/spdk/string.h 00:03:38.018 TEST_HEADER include/spdk/thread.h 00:03:38.018 TEST_HEADER include/spdk/trace.h 00:03:38.018 TEST_HEADER include/spdk/trace_parser.h 00:03:38.018 TEST_HEADER include/spdk/tree.h 00:03:38.018 LINK spdk_lspci 00:03:38.018 TEST_HEADER include/spdk/ublk.h 00:03:38.018 TEST_HEADER include/spdk/util.h 00:03:38.018 TEST_HEADER include/spdk/uuid.h 00:03:38.018 TEST_HEADER include/spdk/version.h 00:03:38.018 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:38.018 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:38.018 TEST_HEADER include/spdk/vhost.h 00:03:38.018 TEST_HEADER include/spdk/vmd.h 00:03:38.018 TEST_HEADER include/spdk/xor.h 00:03:38.018 TEST_HEADER include/spdk/zipf.h 00:03:38.018 CXX test/cpp_headers/accel.o 00:03:38.018 LINK lsvmd 00:03:38.018 LINK rpc_client_test 00:03:38.018 LINK reactor 00:03:38.018 LINK spdk_nvme_discover 00:03:38.285 LINK jsoncat 00:03:38.285 LINK event_perf 00:03:38.285 LINK zipf 00:03:38.285 LINK interrupt_tgt 00:03:38.285 LINK histogram_perf 00:03:38.285 LINK poller_perf 00:03:38.285 LINK nvmf_tgt 00:03:38.285 LINK vhost 00:03:38.285 LINK spdk_trace_record 00:03:38.285 LINK iscsi_tgt 00:03:38.285 LINK ioat_perf 00:03:38.285 LINK verify 00:03:38.285 LINK spdk_tgt 00:03:38.285 LINK bdev_svc 00:03:38.285 LINK mkfs 00:03:38.285 LINK 
hello_world 00:03:38.285 LINK hotplug 00:03:38.285 LINK hello_sock 00:03:38.285 LINK hello_blob 00:03:38.285 LINK mem_callbacks 00:03:38.285 LINK hello_bdev 00:03:38.285 LINK thread 00:03:38.285 LINK aer 00:03:38.546 CXX test/cpp_headers/accel_module.o 00:03:38.546 CC examples/vmd/led/led.o 00:03:38.546 LINK arbitration 00:03:38.546 CXX test/cpp_headers/assert.o 00:03:38.546 LINK idxd_perf 00:03:38.546 LINK reconnect 00:03:38.546 LINK nvmf 00:03:38.546 LINK spdk_dd 00:03:38.546 CC test/event/reactor_perf/reactor_perf.o 00:03:38.546 CC test/app/stub/stub.o 00:03:38.546 CXX test/cpp_headers/barrier.o 00:03:38.546 CC test/event/app_repeat/app_repeat.o 00:03:38.546 CC test/nvme/reset/reset.o 00:03:38.546 CC test/env/vtophys/vtophys.o 00:03:38.546 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:38.546 CC examples/nvme/abort/abort.o 00:03:38.546 LINK spdk_trace 00:03:38.546 CC test/nvme/sgl/sgl.o 00:03:38.546 LINK dif 00:03:38.546 CC app/fio/bdev/fio_plugin.o 00:03:38.546 LINK bdevio 00:03:38.546 CXX test/cpp_headers/base64.o 00:03:38.546 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:38.808 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:38.809 LINK test_dma 00:03:38.809 LINK accel_perf 00:03:38.809 CC test/nvme/e2edp/nvme_dp.o 00:03:38.809 CXX test/cpp_headers/bdev.o 00:03:38.809 CC test/env/memory/memory_ut.o 00:03:38.809 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:38.809 CC test/nvme/overhead/overhead.o 00:03:38.809 LINK nvme_fuzz 00:03:38.809 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:38.809 LINK nvme_manage 00:03:38.809 CXX test/cpp_headers/bdev_module.o 00:03:38.809 CC test/env/pci/pci_ut.o 00:03:38.809 LINK led 00:03:38.809 CXX test/cpp_headers/bdev_zone.o 00:03:38.809 CC test/nvme/err_injection/err_injection.o 00:03:38.809 CC test/event/scheduler/scheduler.o 00:03:38.809 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:38.809 LINK reactor_perf 00:03:38.809 LINK blobcli 00:03:38.809 LINK vtophys 00:03:38.809 LINK app_repeat 00:03:38.809 CC 
test/nvme/reserve/reserve.o 00:03:38.809 LINK spdk_nvme 00:03:38.809 CC test/nvme/startup/startup.o 00:03:38.809 CXX test/cpp_headers/bit_array.o 00:03:38.809 LINK stub 00:03:38.809 CC test/nvme/simple_copy/simple_copy.o 00:03:39.075 CXX test/cpp_headers/bit_pool.o 00:03:39.075 CC test/nvme/connect_stress/connect_stress.o 00:03:39.075 CC test/nvme/boot_partition/boot_partition.o 00:03:39.075 LINK cmb_copy 00:03:39.075 CC test/nvme/compliance/nvme_compliance.o 00:03:39.075 CC test/nvme/fused_ordering/fused_ordering.o 00:03:39.075 CXX test/cpp_headers/blob_bdev.o 00:03:39.075 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:39.075 CC test/nvme/fdp/fdp.o 00:03:39.075 CXX test/cpp_headers/blobfs_bdev.o 00:03:39.075 CXX test/cpp_headers/blobfs.o 00:03:39.075 LINK env_dpdk_post_init 00:03:39.075 CXX test/cpp_headers/blob.o 00:03:39.075 CXX test/cpp_headers/conf.o 00:03:39.075 LINK reset 00:03:39.075 CXX test/cpp_headers/config.o 00:03:39.075 CXX test/cpp_headers/cpuset.o 00:03:39.075 CXX test/cpp_headers/crc16.o 00:03:39.075 CC test/nvme/cuse/cuse.o 00:03:39.075 CXX test/cpp_headers/crc32.o 00:03:39.075 CXX test/cpp_headers/crc64.o 00:03:39.075 CXX test/cpp_headers/dif.o 00:03:39.075 CXX test/cpp_headers/dma.o 00:03:39.340 LINK sgl 00:03:39.341 CXX test/cpp_headers/endian.o 00:03:39.341 LINK pmr_persistence 00:03:39.341 LINK err_injection 00:03:39.341 CXX test/cpp_headers/env_dpdk.o 00:03:39.341 CXX test/cpp_headers/env.o 00:03:39.341 CXX test/cpp_headers/event.o 00:03:39.341 LINK startup 00:03:39.341 CXX test/cpp_headers/fd_group.o 00:03:39.341 LINK spdk_nvme_perf 00:03:39.341 LINK nvme_dp 00:03:39.341 LINK scheduler 00:03:39.341 CXX test/cpp_headers/fd.o 00:03:39.341 CXX test/cpp_headers/file.o 00:03:39.341 LINK reserve 00:03:39.341 CXX test/cpp_headers/ftl.o 00:03:39.341 LINK overhead 00:03:39.341 LINK connect_stress 00:03:39.341 LINK boot_partition 00:03:39.341 LINK abort 00:03:39.341 LINK spdk_nvme_identify 00:03:39.341 LINK simple_copy 00:03:39.341 CXX 
test/cpp_headers/gpt_spec.o 00:03:39.341 CXX test/cpp_headers/hexlify.o 00:03:39.341 LINK bdevperf 00:03:39.341 LINK spdk_top 00:03:39.341 CXX test/cpp_headers/histogram_data.o 00:03:39.341 CXX test/cpp_headers/idxd.o 00:03:39.341 CXX test/cpp_headers/idxd_spec.o 00:03:39.341 LINK doorbell_aers 00:03:39.341 CXX test/cpp_headers/init.o 00:03:39.341 CXX test/cpp_headers/ioat.o 00:03:39.603 CXX test/cpp_headers/ioat_spec.o 00:03:39.603 LINK fused_ordering 00:03:39.603 LINK pci_ut 00:03:39.603 CXX test/cpp_headers/iscsi_spec.o 00:03:39.603 CXX test/cpp_headers/json.o 00:03:39.603 LINK vhost_fuzz 00:03:39.603 CXX test/cpp_headers/jsonrpc.o 00:03:39.603 CXX test/cpp_headers/likely.o 00:03:39.603 CXX test/cpp_headers/log.o 00:03:39.603 CXX test/cpp_headers/lvol.o 00:03:39.603 CXX test/cpp_headers/memory.o 00:03:39.603 CXX test/cpp_headers/mmio.o 00:03:39.603 CXX test/cpp_headers/nbd.o 00:03:39.603 LINK spdk_bdev 00:03:39.603 CXX test/cpp_headers/notify.o 00:03:39.603 CXX test/cpp_headers/nvme.o 00:03:39.603 LINK nvme_compliance 00:03:39.603 CXX test/cpp_headers/nvme_intel.o 00:03:39.603 CXX test/cpp_headers/nvme_ocssd.o 00:03:39.603 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:39.603 CXX test/cpp_headers/nvme_spec.o 00:03:39.603 CXX test/cpp_headers/nvme_zns.o 00:03:39.603 CXX test/cpp_headers/nvmf_cmd.o 00:03:39.603 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:39.603 CXX test/cpp_headers/nvmf.o 00:03:39.603 CXX test/cpp_headers/nvmf_spec.o 00:03:39.603 CXX test/cpp_headers/nvmf_transport.o 00:03:39.603 CXX test/cpp_headers/opal_spec.o 00:03:39.603 CXX test/cpp_headers/opal.o 00:03:39.603 CXX test/cpp_headers/pci_ids.o 00:03:39.603 CXX test/cpp_headers/pipe.o 00:03:39.603 CXX test/cpp_headers/queue.o 00:03:39.603 CXX test/cpp_headers/reduce.o 00:03:39.603 LINK fdp 00:03:39.603 LINK memory_ut 00:03:39.603 CXX test/cpp_headers/rpc.o 00:03:39.865 CXX test/cpp_headers/scheduler.o 00:03:39.865 CXX test/cpp_headers/scsi.o 00:03:39.865 CXX test/cpp_headers/scsi_spec.o 00:03:39.865 
CXX test/cpp_headers/sock.o 00:03:39.865 CXX test/cpp_headers/stdinc.o 00:03:39.865 CXX test/cpp_headers/string.o 00:03:39.865 CXX test/cpp_headers/thread.o 00:03:39.865 CXX test/cpp_headers/trace.o 00:03:39.865 CXX test/cpp_headers/trace_parser.o 00:03:39.865 CXX test/cpp_headers/tree.o 00:03:39.865 CXX test/cpp_headers/ublk.o 00:03:39.865 CXX test/cpp_headers/util.o 00:03:39.865 CXX test/cpp_headers/uuid.o 00:03:39.865 CXX test/cpp_headers/version.o 00:03:39.865 CXX test/cpp_headers/vfio_user_pci.o 00:03:39.865 CXX test/cpp_headers/vfio_user_spec.o 00:03:39.865 CXX test/cpp_headers/vhost.o 00:03:39.865 CXX test/cpp_headers/vmd.o 00:03:39.865 CXX test/cpp_headers/xor.o 00:03:39.865 CXX test/cpp_headers/zipf.o 00:03:40.800 LINK cuse 00:03:41.057 LINK iscsi_fuzz 00:03:43.589 LINK esnap 00:03:43.589 00:03:43.589 real 0m38.069s 00:03:43.589 user 7m16.533s 00:03:43.589 sys 1m38.247s 00:03:43.589 02:51:38 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:43.589 02:51:38 -- common/autotest_common.sh@10 -- $ set +x 00:03:43.589 ************************************ 00:03:43.589 END TEST make 00:03:43.589 ************************************ 00:03:43.589 02:51:38 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:43.589 02:51:38 -- nvmf/common.sh@7 -- # uname -s 00:03:43.589 02:51:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:43.589 02:51:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:43.589 02:51:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:43.589 02:51:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:43.589 02:51:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:43.589 02:51:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:43.589 02:51:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:43.589 02:51:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:43.589 02:51:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:43.589 
02:51:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:43.589 02:51:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:43.589 02:51:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:43.589 02:51:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:43.589 02:51:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:43.589 02:51:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:43.589 02:51:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:43.589 02:51:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:43.589 02:51:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:43.589 02:51:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:43.589 02:51:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.589 02:51:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.589 02:51:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.589 02:51:38 -- paths/export.sh@5 -- # export PATH 00:03:43.589 02:51:38 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:43.589 02:51:38 -- nvmf/common.sh@46 -- # : 0 00:03:43.589 02:51:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:43.589 02:51:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:43.589 02:51:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:43.589 02:51:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:43.589 02:51:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:43.589 02:51:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:43.589 02:51:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:43.589 02:51:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:43.589 02:51:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:43.589 02:51:38 -- spdk/autotest.sh@32 -- # uname -s 00:03:43.589 02:51:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:43.589 02:51:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:43.589 02:51:38 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:43.589 02:51:38 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:43.589 02:51:38 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:43.589 02:51:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:43.589 02:51:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:43.589 02:51:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:43.589 02:51:38 -- spdk/autotest.sh@48 -- # udevadm_pid=1848941 00:03:43.589 02:51:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:43.589 02:51:38 -- 
spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:43.848 02:51:38 -- spdk/autotest.sh@54 -- # echo 1848943 00:03:43.848 02:51:38 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:43.848 02:51:38 -- spdk/autotest.sh@56 -- # echo 1848944 00:03:43.848 02:51:38 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:03:43.848 02:51:38 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:03:43.848 02:51:38 -- spdk/autotest.sh@60 -- # echo 1848945 00:03:43.848 02:51:38 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:03:43.848 02:51:38 -- spdk/autotest.sh@62 -- # echo 1848946 00:03:43.848 02:51:38 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:03:43.848 02:51:38 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:43.848 02:51:38 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:43.848 02:51:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:43.848 02:51:38 -- common/autotest_common.sh@10 -- # set +x 00:03:43.848 02:51:38 -- spdk/autotest.sh@70 -- # create_test_list 00:03:43.848 02:51:38 -- common/autotest_common.sh@736 -- # xtrace_disable 00:03:43.848 02:51:38 -- common/autotest_common.sh@10 -- # set +x 00:03:43.848 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:03:43.848 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:03:43.848 02:51:38 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:43.848 02:51:38 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:43.848 02:51:38 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:43.848 02:51:38 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:43.848 02:51:38 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:43.848 02:51:38 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:43.848 02:51:38 -- common/autotest_common.sh@1440 -- # uname 00:03:43.848 02:51:38 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:03:43.848 02:51:38 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:43.848 02:51:38 -- common/autotest_common.sh@1460 -- # uname 00:03:43.848 02:51:38 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:03:43.848 02:51:38 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:03:43.848 02:51:38 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:03:43.848 02:51:38 -- spdk/autotest.sh@83 -- # hash lcov 00:03:43.848 02:51:38 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:43.848 02:51:38 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:03:43.848 --rc lcov_branch_coverage=1 00:03:43.848 --rc lcov_function_coverage=1 00:03:43.848 --rc genhtml_branch_coverage=1 00:03:43.848 --rc genhtml_function_coverage=1 00:03:43.848 --rc genhtml_legend=1 00:03:43.848 --rc geninfo_all_blocks=1 00:03:43.848 ' 00:03:43.848 02:51:38 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:03:43.848 --rc lcov_branch_coverage=1 00:03:43.848 --rc lcov_function_coverage=1 00:03:43.848 --rc genhtml_branch_coverage=1 00:03:43.848 --rc genhtml_function_coverage=1 00:03:43.848 --rc genhtml_legend=1 00:03:43.848 
--rc geninfo_all_blocks=1 00:03:43.848 ' 00:03:43.848 02:51:38 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:03:43.848 --rc lcov_branch_coverage=1 00:03:43.848 --rc lcov_function_coverage=1 00:03:43.848 --rc genhtml_branch_coverage=1 00:03:43.848 --rc genhtml_function_coverage=1 00:03:43.848 --rc genhtml_legend=1 00:03:43.848 --rc geninfo_all_blocks=1 00:03:43.848 --no-external' 00:03:43.848 02:51:38 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:03:43.848 --rc lcov_branch_coverage=1 00:03:43.848 --rc lcov_function_coverage=1 00:03:43.848 --rc genhtml_branch_coverage=1 00:03:43.848 --rc genhtml_function_coverage=1 00:03:43.848 --rc genhtml_legend=1 00:03:43.848 --rc geninfo_all_blocks=1 00:03:43.848 --no-external' 00:03:43.848 02:51:38 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:43.848 lcov: LCOV version 1.14 00:03:43.848 02:51:38 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:45.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:45.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:45.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:45.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:45.251 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:45.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:45.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:45.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:45.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:45.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:45.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:45.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:45.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:45.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:45.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:45.251 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:45.510 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no 
functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:45.510 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:45.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:45.511 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:45.511 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:45.511 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:45.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:45.511 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:45.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:45.769 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:45.769 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no 
functions found 00:03:45.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:45.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:45.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:45.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:45.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:45.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:45.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:45.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:45.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:45.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:45.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:45.770 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:45.770 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:00.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:00.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:00.630 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:00.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:00.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:00.630 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:15.495 02:52:10 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:15.495 02:52:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:15.495 02:52:10 -- common/autotest_common.sh@10 -- # set +x 00:04:15.495 02:52:10 -- spdk/autotest.sh@102 -- # rm -f 00:04:15.495 02:52:10 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.064 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:16.064 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:16.064 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:16.322 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:16.322 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:16.322 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:16.322 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:16.322 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:16.322 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:16.322 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:16.322 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:16.322 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:16.322 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:16.322 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:16.322 0000:80:04.2 (8086 0e22): Already using the 
ioatdma driver 00:04:16.322 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:16.322 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:16.582 02:52:11 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:16.582 02:52:11 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:16.582 02:52:11 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:16.582 02:52:11 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:16.582 02:52:11 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:16.582 02:52:11 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:16.582 02:52:11 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:16.582 02:52:11 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.582 02:52:11 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:16.582 02:52:11 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:16.582 02:52:11 -- spdk/autotest.sh@121 -- # grep -v p 00:04:16.582 02:52:11 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:04:16.582 02:52:11 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:16.582 02:52:11 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:16.582 02:52:11 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:16.582 02:52:11 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:16.582 02:52:11 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:16.582 No valid GPT data, bailing 00:04:16.582 02:52:11 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:16.582 02:52:11 -- scripts/common.sh@393 -- # pt= 00:04:16.582 02:52:11 -- scripts/common.sh@394 -- # return 1 00:04:16.582 02:52:11 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:16.582 1+0 records in 00:04:16.582 1+0 records out 00:04:16.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.00226869 s, 462 MB/s 00:04:16.582 02:52:11 -- spdk/autotest.sh@129 -- # sync 00:04:16.582 02:52:11 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:16.582 02:52:11 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:16.582 02:52:11 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:18.491 02:52:13 -- spdk/autotest.sh@135 -- # uname -s 00:04:18.491 02:52:13 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:18.491 02:52:13 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:18.491 02:52:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:18.491 02:52:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:18.491 02:52:13 -- common/autotest_common.sh@10 -- # set +x 00:04:18.491 ************************************ 00:04:18.491 START TEST setup.sh 00:04:18.491 ************************************ 00:04:18.491 02:52:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:18.491 * Looking for test storage... 
00:04:18.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:18.491 02:52:13 -- setup/test-setup.sh@10 -- # uname -s 00:04:18.491 02:52:13 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:18.491 02:52:13 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:18.491 02:52:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:18.491 02:52:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:18.491 02:52:13 -- common/autotest_common.sh@10 -- # set +x 00:04:18.491 ************************************ 00:04:18.491 START TEST acl 00:04:18.491 ************************************ 00:04:18.491 02:52:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:18.491 * Looking for test storage... 00:04:18.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:18.491 02:52:13 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:18.491 02:52:13 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:18.491 02:52:13 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:18.491 02:52:13 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:18.491 02:52:13 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:18.491 02:52:13 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:18.491 02:52:13 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:18.491 02:52:13 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:18.491 02:52:13 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:18.491 02:52:13 -- setup/acl.sh@12 -- # devs=() 00:04:18.491 02:52:13 -- setup/acl.sh@12 -- # declare -a devs 00:04:18.491 02:52:13 -- setup/acl.sh@13 -- # drivers=() 00:04:18.491 02:52:13 -- setup/acl.sh@13 -- # declare -A drivers 00:04:18.491 02:52:13 -- setup/acl.sh@51 -- # 
setup reset 00:04:18.491 02:52:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:18.491 02:52:13 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:19.868 02:52:14 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:19.868 02:52:14 -- setup/acl.sh@16 -- # local dev driver 00:04:19.868 02:52:14 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:19.868 02:52:14 -- setup/acl.sh@15 -- # setup output status 00:04:19.868 02:52:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.868 02:52:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:20.823 Hugepages 00:04:20.823 node hugesize free / total 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 00:04:20.823 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:20.823 02:52:15 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.823 02:52:15 -- 
setup/acl.sh@20 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:15 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:20.823 02:52:15 -- setup/acl.sh@20 -- # continue 00:04:20.823 02:52:15 -- setup/acl.sh@18 -- # read 
-r _ dev _ _ _ driver _ 00:04:20.823 02:52:16 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:20.823 02:52:16 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:20.823 02:52:16 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:20.823 02:52:16 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:20.823 02:52:16 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:20.823 02:52:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:20.823 02:52:16 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:20.823 02:52:16 -- setup/acl.sh@54 -- # run_test denied denied 00:04:20.823 02:52:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:20.823 02:52:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:20.823 02:52:16 -- common/autotest_common.sh@10 -- # set +x 00:04:20.823 ************************************ 00:04:20.823 START TEST denied 00:04:20.823 ************************************ 00:04:20.823 02:52:16 -- common/autotest_common.sh@1104 -- # denied 00:04:20.823 02:52:16 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:20.823 02:52:16 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:20.823 02:52:16 -- setup/acl.sh@38 -- # setup output config 00:04:20.823 02:52:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.823 02:52:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:22.732 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:22.732 02:52:17 -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:22.732 02:52:17 -- setup/acl.sh@28 -- # local dev driver 00:04:22.732 02:52:17 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:22.732 02:52:17 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:22.732 02:52:17 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:22.732 02:52:17 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:22.732 02:52:17 -- 
setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:22.732 02:52:17 -- setup/acl.sh@41 -- # setup reset 00:04:22.732 02:52:17 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:22.732 02:52:17 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:24.639 00:04:24.639 real 0m3.731s 00:04:24.639 user 0m1.139s 00:04:24.639 sys 0m1.753s 00:04:24.639 02:52:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.639 02:52:19 -- common/autotest_common.sh@10 -- # set +x 00:04:24.639 ************************************ 00:04:24.639 END TEST denied 00:04:24.639 ************************************ 00:04:24.639 02:52:19 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:24.639 02:52:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:24.639 02:52:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:24.639 02:52:19 -- common/autotest_common.sh@10 -- # set +x 00:04:24.639 ************************************ 00:04:24.639 START TEST allowed 00:04:24.639 ************************************ 00:04:24.639 02:52:19 -- common/autotest_common.sh@1104 -- # allowed 00:04:24.639 02:52:19 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:24.639 02:52:19 -- setup/acl.sh@45 -- # setup output config 00:04:24.639 02:52:19 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:24.639 02:52:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.639 02:52:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:27.198 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:27.198 02:52:22 -- setup/acl.sh@47 -- # verify 00:04:27.198 02:52:22 -- setup/acl.sh@28 -- # local dev driver 00:04:27.198 02:52:22 -- setup/acl.sh@48 -- # setup reset 00:04:27.198 02:52:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:27.198 02:52:22 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:28.592 
00:04:28.592 real 0m3.932s 00:04:28.592 user 0m1.029s 00:04:28.592 sys 0m1.768s 00:04:28.592 02:52:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.592 02:52:23 -- common/autotest_common.sh@10 -- # set +x 00:04:28.592 ************************************ 00:04:28.592 END TEST allowed 00:04:28.592 ************************************ 00:04:28.592 00:04:28.592 real 0m10.330s 00:04:28.592 user 0m3.230s 00:04:28.592 sys 0m5.212s 00:04:28.592 02:52:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.592 02:52:23 -- common/autotest_common.sh@10 -- # set +x 00:04:28.592 ************************************ 00:04:28.592 END TEST acl 00:04:28.592 ************************************ 00:04:28.592 02:52:23 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:28.592 02:52:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:28.592 02:52:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:28.592 02:52:23 -- common/autotest_common.sh@10 -- # set +x 00:04:28.592 ************************************ 00:04:28.592 START TEST hugepages 00:04:28.592 ************************************ 00:04:28.592 02:52:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:28.592 * Looking for test storage... 
00:04:28.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:28.592 02:52:23 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:28.592 02:52:23 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:28.592 02:52:23 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:28.592 02:52:23 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:28.592 02:52:23 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:28.592 02:52:23 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:28.592 02:52:23 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:28.592 02:52:23 -- setup/common.sh@18 -- # local node= 00:04:28.592 02:52:23 -- setup/common.sh@19 -- # local var val 00:04:28.592 02:52:23 -- setup/common.sh@20 -- # local mem_f mem 00:04:28.592 02:52:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:28.592 02:52:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:28.592 02:52:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:28.592 02:52:23 -- setup/common.sh@28 -- # mapfile -t mem 00:04:28.592 02:52:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:28.592 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.592 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41716688 kB' 'MemAvailable: 45227004 kB' 'Buffers: 3736 kB' 'Cached: 12226332 kB' 'SwapCached: 0 kB' 'Active: 9227052 kB' 'Inactive: 3507584 kB' 'Active(anon): 8832700 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 507840 kB' 'Mapped: 216660 kB' 'Shmem: 8328132 kB' 'KReclaimable: 204008 kB' 'Slab: 582204 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 378196 kB' 'KernelStack: 12832 kB' 'PageTables: 
8768 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 9956740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196372 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 
02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 
00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.593 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.593 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.594 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.594 02:52:23 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:28.594 02:52:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.594 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.594 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.594 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.594 02:52:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.594 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.594 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.594 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.594 02:52:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.594 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.594 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.594 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.594 02:52:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.594 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.594 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.594 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.594 02:52:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.594 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.594 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.594 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.594 02:52:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.594 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.594 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.594 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.594 02:52:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.594 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.594 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.594 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.594 02:52:23 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.594 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.594 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.594 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.594 02:52:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.594 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.594 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.852 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.852 02:52:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.852 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.852 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.852 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.852 02:52:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.852 02:52:23 -- setup/common.sh@32 -- # continue 00:04:28.852 02:52:23 -- setup/common.sh@31 -- # IFS=': ' 00:04:28.852 02:52:23 -- setup/common.sh@31 -- # read -r var val _ 00:04:28.852 02:52:23 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:28.852 02:52:23 -- setup/common.sh@33 -- # echo 2048 00:04:28.852 02:52:23 -- setup/common.sh@33 -- # return 0 00:04:28.852 02:52:23 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:28.852 02:52:23 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:28.852 02:52:23 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:28.852 02:52:23 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:28.852 02:52:23 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:28.852 02:52:23 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:28.852 02:52:23 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:28.852 02:52:23 -- setup/hugepages.sh@207 -- # get_nodes 00:04:28.852 02:52:23 -- setup/hugepages.sh@27 
-- # local node 00:04:28.852 02:52:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.852 02:52:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:28.852 02:52:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:28.852 02:52:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:28.852 02:52:23 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:28.852 02:52:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:28.852 02:52:23 -- setup/hugepages.sh@208 -- # clear_hp 00:04:28.852 02:52:23 -- setup/hugepages.sh@37 -- # local node hp 00:04:28.852 02:52:23 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:28.852 02:52:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:28.852 02:52:23 -- setup/hugepages.sh@41 -- # echo 0 00:04:28.852 02:52:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:28.852 02:52:23 -- setup/hugepages.sh@41 -- # echo 0 00:04:28.852 02:52:23 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:28.852 02:52:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:28.852 02:52:23 -- setup/hugepages.sh@41 -- # echo 0 00:04:28.852 02:52:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:28.852 02:52:23 -- setup/hugepages.sh@41 -- # echo 0 00:04:28.852 02:52:23 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:28.852 02:52:23 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:28.852 02:52:23 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:28.852 02:52:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:28.852 02:52:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:28.852 02:52:23 -- common/autotest_common.sh@10 -- # set +x 00:04:28.852 
************************************ 00:04:28.852 START TEST default_setup 00:04:28.852 ************************************ 00:04:28.852 02:52:23 -- common/autotest_common.sh@1104 -- # default_setup 00:04:28.852 02:52:23 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:28.852 02:52:23 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:28.852 02:52:23 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:28.852 02:52:23 -- setup/hugepages.sh@51 -- # shift 00:04:28.852 02:52:23 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:28.852 02:52:23 -- setup/hugepages.sh@52 -- # local node_ids 00:04:28.852 02:52:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:28.852 02:52:23 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:28.852 02:52:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:28.852 02:52:23 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:28.852 02:52:23 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:28.852 02:52:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:28.852 02:52:23 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:28.852 02:52:23 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:28.852 02:52:23 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:28.852 02:52:23 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:28.852 02:52:23 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:28.852 02:52:23 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:28.852 02:52:23 -- setup/hugepages.sh@73 -- # return 0 00:04:28.852 02:52:23 -- setup/hugepages.sh@137 -- # setup output 00:04:28.852 02:52:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:28.852 02:52:23 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:30.227 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:30.227 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:30.227 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 
00:04:30.227 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:30.227 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:30.227 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:30.227 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:30.227 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:30.227 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:30.227 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:30.227 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:30.227 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:30.227 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:30.227 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:30.227 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:30.227 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:31.169 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:31.169 02:52:26 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:31.169 02:52:26 -- setup/hugepages.sh@89 -- # local node 00:04:31.169 02:52:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:31.169 02:52:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:31.169 02:52:26 -- setup/hugepages.sh@92 -- # local surp 00:04:31.169 02:52:26 -- setup/hugepages.sh@93 -- # local resv 00:04:31.169 02:52:26 -- setup/hugepages.sh@94 -- # local anon 00:04:31.169 02:52:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:31.169 02:52:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:31.169 02:52:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:31.169 02:52:26 -- setup/common.sh@18 -- # local node= 00:04:31.169 02:52:26 -- setup/common.sh@19 -- # local var val 00:04:31.169 02:52:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:31.169 02:52:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.169 02:52:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.169 02:52:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.169 02:52:26 -- setup/common.sh@28 -- # 
mapfile -t mem 00:04:31.169 02:52:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.169 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.169 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.169 02:52:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43835848 kB' 'MemAvailable: 47346164 kB' 'Buffers: 3736 kB' 'Cached: 12226424 kB' 'SwapCached: 0 kB' 'Active: 9244340 kB' 'Inactive: 3507584 kB' 'Active(anon): 8849988 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525068 kB' 'Mapped: 216724 kB' 'Shmem: 8328224 kB' 'KReclaimable: 204008 kB' 'Slab: 581328 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 377320 kB' 'KernelStack: 12784 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9977000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:31.169 02:52:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.169 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.169 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.169 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.169 02:52:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.169 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.169 
02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.169 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.169 [... repeated field-scan iterations elided: each /proc/meminfo field from MemAvailable through VmallocUsed is tested via setup/common.sh@32 [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] and skipped with setup/common.sh@32 continue ...] 00:04:31.170 02:52:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:31.170 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.170 [... Percpu and HardwareCorrupted skipped likewise ...] 00:04:31.170 02:52:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:31.170 02:52:26 -- setup/common.sh@33 -- # echo 0 00:04:31.170 02:52:26 -- setup/common.sh@33 -- # return 0 00:04:31.170 02:52:26 -- setup/hugepages.sh@97 -- # anon=0 00:04:31.170 02:52:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:31.170 02:52:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.170 02:52:26 -- setup/common.sh@18 -- # local node= 00:04:31.170 02:52:26 -- setup/common.sh@19 -- # local var val 00:04:31.170 02:52:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:31.170 02:52:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.170 02:52:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.170 02:52:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.170 02:52:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.170 02:52:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.170 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.170 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.171 02:52:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43839544 kB' 'MemAvailable: 47349860 kB' 'Buffers: 3736 kB' 'Cached: 12226428 kB' 'SwapCached: 0 kB' 'Active: 9244240 kB' 'Inactive: 3507584 kB' 'Active(anon): 8849888 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525048 kB' 'Mapped: 216788 kB' 'Shmem: 8328228 kB' 'KReclaimable: 204008 kB' 'Slab: 581392 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 377384 kB' 'KernelStack: 12800 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9977012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:31.171 [... repeated field-scan iterations elided: each field from MemTotal through HugePages_Rsvd is tested via setup/common.sh@32 [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and skipped with setup/common.sh@32 continue ...] 00:04:31.172 02:52:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.172 02:52:26 -- setup/common.sh@33 -- # echo 0 00:04:31.172 02:52:26 -- setup/common.sh@33 -- # return 0 00:04:31.172 02:52:26 -- setup/hugepages.sh@99 -- # surp=0 00:04:31.172 02:52:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:31.172 02:52:26 -- setup/common.sh@17 -- # local
get=HugePages_Rsvd 00:04:31.172 02:52:26 -- setup/common.sh@18 -- # local node= 00:04:31.172 02:52:26 -- setup/common.sh@19 -- # local var val 00:04:31.172 02:52:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:31.172 02:52:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.172 02:52:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.172 02:52:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.172 02:52:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.172 02:52:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.172 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.172 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.172 02:52:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43840216 kB' 'MemAvailable: 47350532 kB' 'Buffers: 3736 kB' 'Cached: 12226436 kB' 'SwapCached: 0 kB' 'Active: 9244112 kB' 'Inactive: 3507584 kB' 'Active(anon): 8849760 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524860 kB' 'Mapped: 216712 kB' 'Shmem: 8328236 kB' 'KReclaimable: 204008 kB' 'Slab: 581352 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 377344 kB' 'KernelStack: 12784 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9977028 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 
15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:31.172 [... repeated field-scan iterations elided: fields from MemTotal onward are tested via setup/common.sh@32 [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and skipped with setup/common.sh@32 continue; scan still in progress at end of this log segment ...] 00:04:31.173 02:52:26 --
setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.173 02:52:26 -- 
setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- 
# continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:31.173 02:52:26 -- setup/common.sh@33 -- # echo 0 00:04:31.173 02:52:26 -- setup/common.sh@33 -- # return 0 00:04:31.173 02:52:26 -- setup/hugepages.sh@100 -- # resv=0 00:04:31.173 02:52:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:31.173 nr_hugepages=1024 00:04:31.173 02:52:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:31.173 resv_hugepages=0 00:04:31.173 02:52:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:31.173 surplus_hugepages=0 00:04:31.173 02:52:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:31.173 anon_hugepages=0 00:04:31.173 02:52:26 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.173 02:52:26 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:31.173 02:52:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:31.173 02:52:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:31.173 02:52:26 -- setup/common.sh@18 -- # local node= 00:04:31.173 02:52:26 -- setup/common.sh@19 -- # local var val 00:04:31.173 02:52:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:31.173 02:52:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.173 02:52:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.173 02:52:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.173 02:52:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.173 02:52:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43840216 kB' 'MemAvailable: 47350532 kB' 
'Buffers: 3736 kB' 'Cached: 12226452 kB' 'SwapCached: 0 kB' 'Active: 9244124 kB' 'Inactive: 3507584 kB' 'Active(anon): 8849772 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524860 kB' 'Mapped: 216712 kB' 'Shmem: 8328252 kB' 'KReclaimable: 204008 kB' 'Slab: 581352 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 377344 kB' 'KernelStack: 12784 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9977040 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.173 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.173 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 
00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.174 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.174 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 
-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:31.175 02:52:26 -- setup/common.sh@33 -- # echo 1024 00:04:31.175 02:52:26 -- setup/common.sh@33 -- # return 0 00:04:31.175 02:52:26 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:31.175 02:52:26 -- setup/hugepages.sh@112 -- # get_nodes 00:04:31.175 02:52:26 -- setup/hugepages.sh@27 -- # local node 00:04:31.175 02:52:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.175 02:52:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:31.175 02:52:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.175 02:52:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:31.175 02:52:26 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:31.175 02:52:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:31.175 02:52:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:31.175 02:52:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:31.175 02:52:26 -- setup/hugepages.sh@117 -- # get_meminfo 
HugePages_Surp 0 00:04:31.175 02:52:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:31.175 02:52:26 -- setup/common.sh@18 -- # local node=0 00:04:31.175 02:52:26 -- setup/common.sh@19 -- # local var val 00:04:31.175 02:52:26 -- setup/common.sh@20 -- # local mem_f mem 00:04:31.175 02:52:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.175 02:52:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:31.175 02:52:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:31.175 02:52:26 -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.175 02:52:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26817108 kB' 'MemUsed: 6012776 kB' 'SwapCached: 0 kB' 'Active: 2636972 kB' 'Inactive: 110796 kB' 'Active(anon): 2526084 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2501308 kB' 'Mapped: 41932 kB' 'AnonPages: 249648 kB' 'Shmem: 2279624 kB' 'KernelStack: 7576 kB' 'PageTables: 5132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92820 kB' 'Slab: 324468 kB' 'SReclaimable: 92820 kB' 'SUnreclaim: 231648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.175 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.175 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.176 02:52:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.176 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.176 02:52:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.176 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.176 02:52:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.176 02:52:26 -- 
setup/common.sh@32 -- # continue 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.176 02:52:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.176 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.176 02:52:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.176 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.176 02:52:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.176 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.176 02:52:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.176 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.176 02:52:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.176 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.176 02:52:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.176 02:52:26 -- setup/common.sh@32 -- # continue 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # IFS=': ' 00:04:31.176 02:52:26 -- setup/common.sh@31 -- # read -r var val _ 00:04:31.176 02:52:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:31.176 02:52:26 -- 
setup/common.sh@33 -- # echo 0 00:04:31.176 02:52:26 -- setup/common.sh@33 -- # return 0 00:04:31.176 02:52:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:31.176 02:52:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:31.176 02:52:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:31.176 02:52:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:31.176 02:52:26 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:31.176 node0=1024 expecting 1024 00:04:31.176 02:52:26 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:31.176 00:04:31.176 real 0m2.499s 00:04:31.176 user 0m0.649s 00:04:31.176 sys 0m0.923s 00:04:31.176 02:52:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.176 02:52:26 -- common/autotest_common.sh@10 -- # set +x 00:04:31.176 ************************************ 00:04:31.176 END TEST default_setup 00:04:31.176 ************************************ 00:04:31.176 02:52:26 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:31.176 02:52:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:31.176 02:52:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.176 02:52:26 -- common/autotest_common.sh@10 -- # set +x 00:04:31.176 ************************************ 00:04:31.176 START TEST per_node_1G_alloc 00:04:31.176 ************************************ 00:04:31.176 02:52:26 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:31.176 02:52:26 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:31.176 02:52:26 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:31.176 02:52:26 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:31.176 02:52:26 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:31.176 02:52:26 -- setup/hugepages.sh@51 -- # shift 00:04:31.176 02:52:26 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:31.176 02:52:26 -- setup/hugepages.sh@52 -- # 
local node_ids 00:04:31.176 02:52:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:31.176 02:52:26 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:31.176 02:52:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:31.176 02:52:26 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:31.176 02:52:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:31.176 02:52:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:31.176 02:52:26 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:31.176 02:52:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:31.176 02:52:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:31.176 02:52:26 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:31.176 02:52:26 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:31.176 02:52:26 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:31.176 02:52:26 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:31.176 02:52:26 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:31.176 02:52:26 -- setup/hugepages.sh@73 -- # return 0 00:04:31.176 02:52:26 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:31.176 02:52:26 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:31.176 02:52:26 -- setup/hugepages.sh@146 -- # setup output 00:04:31.176 02:52:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.176 02:52:26 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:32.555 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:32.555 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:32.555 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:32.555 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:32.555 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:32.555 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:32.555 0000:00:04.2 (8086 0e22): 
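The trace above shows `get_test_nr_hugepages_per_node` turning a 1048576 kB request into 512 pages (at the default 2048 kB hugepage size) and assigning that count to each node listed in `HUGENODE=0,1`. A minimal standalone sketch of that arithmetic, assuming the same defaults (variable names mirror the trace but this is not the real `hugepages.sh`):

```shell
#!/usr/bin/env bash
# Sketch of the per-node hugepage split traced above (illustrative only).
size_kb=1048576          # total request, in kB (1 GB)
hugepagesize_kb=2048     # Hugepagesize from /proc/meminfo
user_nodes=(0 1)         # HUGENODE=0,1

# 1048576 / 2048 = 512 pages; each listed node gets the full count.
nr_hugepages=$(( size_kb / hugepagesize_kb ))

declare -A nodes_test
for node in "${user_nodes[@]}"; do
  nodes_test[$node]=$nr_hugepages
done

for node in "${user_nodes[@]}"; do
  echo "node${node}=${nodes_test[$node]}"
done
```

Run as-is this prints `node0=512` and `node1=512`, matching the `nodes_test[_no_nodes]=512` assignments in the trace.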
Already using the vfio-pci driver 00:04:32.555 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:32.555 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:32.555 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:32.555 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:32.555 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:32.555 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:32.555 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:32.555 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:32.555 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:32.555 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:32.555 02:52:27 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:32.555 02:52:27 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:32.555 02:52:27 -- setup/hugepages.sh@89 -- # local node 00:04:32.555 02:52:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:32.555 02:52:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:32.555 02:52:27 -- setup/hugepages.sh@92 -- # local surp 00:04:32.555 02:52:27 -- setup/hugepages.sh@93 -- # local resv 00:04:32.555 02:52:27 -- setup/hugepages.sh@94 -- # local anon 00:04:32.555 02:52:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:32.555 02:52:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:32.555 02:52:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:32.555 02:52:27 -- setup/common.sh@18 -- # local node= 00:04:32.555 02:52:27 -- setup/common.sh@19 -- # local var val 00:04:32.555 02:52:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.555 02:52:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.555 02:52:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.555 02:52:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.555 02:52:27 -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:32.555 02:52:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43857016 kB' 'MemAvailable: 47367332 kB' 'Buffers: 3736 kB' 'Cached: 12226496 kB' 'SwapCached: 0 kB' 'Active: 9244568 kB' 'Inactive: 3507584 kB' 'Active(anon): 8850216 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525156 kB' 'Mapped: 216932 kB' 'Shmem: 8328296 kB' 'KReclaimable: 204008 kB' 'Slab: 581216 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 377208 kB' 'KernelStack: 12768 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9977208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196692 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # 
continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.555 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.555 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- 
setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:32.556 02:52:27 -- setup/common.sh@33 -- # echo 0 00:04:32.556 02:52:27 -- setup/common.sh@33 -- # return 0 00:04:32.556 02:52:27 -- setup/hugepages.sh@97 -- # anon=0 00:04:32.556 02:52:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:32.556 02:52:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.556 02:52:27 -- setup/common.sh@18 -- # local node= 00:04:32.556 02:52:27 -- setup/common.sh@19 -- # local var val 00:04:32.556 02:52:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.556 02:52:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.556 02:52:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.556 02:52:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.556 02:52:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.556 02:52:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43858584 kB' 'MemAvailable: 47368900 kB' 
'Buffers: 3736 kB' 'Cached: 12226500 kB' 'SwapCached: 0 kB' 'Active: 9244500 kB' 'Inactive: 3507584 kB' 'Active(anon): 8850148 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525116 kB' 'Mapped: 216796 kB' 'Shmem: 8328300 kB' 'KReclaimable: 204008 kB' 'Slab: 581284 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 377276 kB' 'KernelStack: 12784 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9977220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- 
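Before each parse loop, the trace runs `mem=("${mem[@]#Node +([0-9]) }")`: an extglob parameter expansion that strips a leading `Node <n> ` from per-node meminfo lines (as produced under `/sys/devices/system/node/nodeN/meminfo`), so the same `var: val` parser handles both the global and per-node files. A small sketch of just that substitution, with made-up sample lines:

```shell
#!/usr/bin/env bash
# Sketch of the "Node <n> " prefix strip traced above; sample data is mine.
shopt -s extglob   # needed for the +([0-9]) pattern

mem=('Node 0 MemTotal: 1024 kB' 'Node 0 HugePages_Surp: 0')

# ${var#pattern} removes the shortest matching prefix from each element.
mem=("${mem[@]#Node +([0-9]) }")

printf '%s\n' "${mem[@]}"
```

This prints `MemTotal: 1024 kB` and `HugePages_Surp: 0`; on lines without the prefix the expansion is a no-op, which is why the script can apply it unconditionally.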
setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.556 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.556 02:52:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 
-- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.557 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.557 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.818 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.818 02:52:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.818 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.818 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.818 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.818 02:52:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.818 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.818 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.818 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.818 02:52:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.818 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.818 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.818 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.818 02:52:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.818 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.818 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.818 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.818 02:52:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.818 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.818 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.818 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.818 02:52:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.818 02:52:27 -- 
setup/common.sh@32 -- # continue 00:04:32.818 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.818 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.818 02:52:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.818 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.818 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.819 02:52:27 -- setup/common.sh@33 -- # echo 0 00:04:32.819 02:52:27 -- setup/common.sh@33 -- # return 0 00:04:32.819 02:52:27 -- setup/hugepages.sh@99 -- # surp=0 00:04:32.819 02:52:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:32.819 02:52:27 -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:32.819 02:52:27 -- setup/common.sh@18 -- # local node= 00:04:32.819 02:52:27 -- setup/common.sh@19 -- # local var val 00:04:32.819 02:52:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.819 02:52:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.819 02:52:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.819 02:52:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.819 02:52:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.819 02:52:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43861232 kB' 'MemAvailable: 47371548 kB' 'Buffers: 3736 kB' 'Cached: 12226512 kB' 'SwapCached: 0 kB' 'Active: 9244400 kB' 'Inactive: 3507584 kB' 'Active(anon): 8850048 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524964 kB' 'Mapped: 216720 kB' 'Shmem: 8328312 kB' 'KReclaimable: 204008 kB' 'Slab: 581292 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 377284 kB' 'KernelStack: 12784 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9977232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 
15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 
00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.819 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.819 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- 
setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- 
setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- 
# continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.820 02:52:27 -- setup/common.sh@33 -- # echo 0 00:04:32.820 02:52:27 -- setup/common.sh@33 -- # return 0 00:04:32.820 02:52:27 -- setup/hugepages.sh@100 -- # resv=0 00:04:32.820 02:52:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:32.820 nr_hugepages=1024 00:04:32.820 02:52:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.820 resv_hugepages=0 00:04:32.820 02:52:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.820 surplus_hugepages=0 00:04:32.820 02:52:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.820 anon_hugepages=0 00:04:32.820 02:52:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.820 02:52:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:32.820 02:52:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.820 02:52:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.820 02:52:27 -- setup/common.sh@18 -- # local node= 00:04:32.820 02:52:27 -- setup/common.sh@19 -- # local var val 00:04:32.820 02:52:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.820 02:52:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.820 02:52:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.820 02:52:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.820 02:52:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.820 02:52:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43861232 kB' 'MemAvailable: 47371548 kB' 
'Buffers: 3736 kB' 'Cached: 12226516 kB' 'SwapCached: 0 kB' 'Active: 9244116 kB' 'Inactive: 3507584 kB' 'Active(anon): 8849764 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524676 kB' 'Mapped: 216720 kB' 'Shmem: 8328316 kB' 'KReclaimable: 204008 kB' 'Slab: 581292 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 377284 kB' 'KernelStack: 12784 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9977248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.820 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.820 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 
00:04:32.821 02:52:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.821 02:52:27 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.821 02:52:27 [xtrace repeats identically for each remaining non-matching meminfo key: SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted] 00:04:32.821 02:52:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.821 02:52:27 -- setup/common.sh@33 -- # echo 1024 00:04:32.821 02:52:27 -- setup/common.sh@33 -- # return 0 00:04:32.821 02:52:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.821 02:52:27 -- setup/hugepages.sh@112 -- # get_nodes 00:04:32.821 02:52:27 -- setup/hugepages.sh@27 -- # local node 00:04:32.821 02:52:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.821 02:52:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:32.821 02:52:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:32.821 02:52:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:32.821 02:52:27 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:32.821 02:52:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:32.821 02:52:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.821 02:52:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.821 02:52:27 -- setup/hugepages.sh@117 -- # get_meminfo
HugePages_Surp 0 00:04:32.821 02:52:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.821 02:52:27 -- setup/common.sh@18 -- # local node=0 00:04:32.821 02:52:27 -- setup/common.sh@19 -- # local var val 00:04:32.821 02:52:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.821 02:52:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.821 02:52:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:32.821 02:52:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:32.821 02:52:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.821 02:52:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.821 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.822 02:52:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27883300 kB' 'MemUsed: 4946584 kB' 'SwapCached: 0 kB' 'Active: 2636816 kB' 'Inactive: 110796 kB' 'Active(anon): 2525928 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2501308 kB' 'Mapped: 41932 kB' 'AnonPages: 249468 kB' 'Shmem: 2279624 kB' 'KernelStack: 7560 kB' 'PageTables: 5080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92820 kB' 'Slab: 324420 kB' 'SReclaimable: 92820 kB' 'SUnreclaim: 231600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:32.822 02:52:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.822 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.822 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.822 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.822 02:52:27 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.822 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.822 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.822 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.822 02:52:27 [xtrace repeats identically for each remaining non-matching node0 meminfo key, MemUsed through HugePages_Free] 00:04:32.822 02:52:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.822 02:52:27 -- 
setup/common.sh@33 -- # echo 0 00:04:32.822 02:52:27 -- setup/common.sh@33 -- # return 0 00:04:32.822 02:52:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.822 02:52:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:32.822 02:52:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:32.822 02:52:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:32.822 02:52:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:32.822 02:52:27 -- setup/common.sh@18 -- # local node=1 00:04:32.822 02:52:27 -- setup/common.sh@19 -- # local var val 00:04:32.822 02:52:27 -- setup/common.sh@20 -- # local mem_f mem 00:04:32.822 02:52:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.822 02:52:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:32.822 02:52:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:32.822 02:52:27 -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.822 02:52:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.822 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.822 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.823 02:52:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15978344 kB' 'MemUsed: 11733480 kB' 'SwapCached: 0 kB' 'Active: 6607636 kB' 'Inactive: 3396788 kB' 'Active(anon): 6324172 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9728984 kB' 'Mapped: 174788 kB' 'AnonPages: 275492 kB' 'Shmem: 6048732 kB' 'KernelStack: 5224 kB' 'PageTables: 3252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111188 kB' 'Slab: 256872 kB' 'SReclaimable: 111188 kB' 'SUnreclaim: 145684 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 
kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:32.823 02:52:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.823 02:52:27 -- setup/common.sh@32 -- # continue 00:04:32.823 02:52:27 -- setup/common.sh@31 -- # IFS=': ' 00:04:32.823 02:52:27 -- setup/common.sh@31 -- # read -r var val _ 00:04:32.823 02:52:27 [xtrace repeats identically for each remaining non-matching node1 meminfo key, MemFree through HugePages_Free] 00:04:32.824 02:52:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.824 02:52:27 -- setup/common.sh@33 -- # echo 0 00:04:32.824 02:52:27 -- setup/common.sh@33 -- # return 0 00:04:32.824 02:52:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:32.824 02:52:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.824 02:52:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.824 02:52:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.824 02:52:27 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:32.824 node0=512 expecting 512 00:04:32.824 02:52:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:32.824 02:52:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:32.824 02:52:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:32.824 02:52:27 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:32.824 node1=512 expecting 512 00:04:32.824 02:52:27 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:32.824 00:04:32.824 real 0m1.514s 00:04:32.824 user 0m0.631s 00:04:32.824 sys 0m0.850s 00:04:32.824 02:52:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.824 02:52:27 -- common/autotest_common.sh@10 -- # set +x 00:04:32.824 ************************************ 00:04:32.824 END TEST per_node_1G_alloc 00:04:32.824 ************************************ 00:04:32.824 02:52:27 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:32.824 02:52:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:32.824 02:52:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.824 02:52:27 -- 
common/autotest_common.sh@10 -- # set +x 00:04:32.824 ************************************ 00:04:32.824 START TEST even_2G_alloc 00:04:32.824 ************************************ 00:04:32.824 02:52:27 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:32.824 02:52:27 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:32.824 02:52:27 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:32.824 02:52:27 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:32.824 02:52:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:32.824 02:52:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:32.824 02:52:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:32.824 02:52:27 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:32.824 02:52:27 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:32.824 02:52:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:32.824 02:52:27 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:32.824 02:52:27 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:32.824 02:52:27 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:32.824 02:52:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:32.824 02:52:27 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:32.824 02:52:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.824 02:52:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:32.824 02:52:27 -- setup/hugepages.sh@83 -- # : 512 00:04:32.824 02:52:27 -- setup/hugepages.sh@84 -- # : 1 00:04:32.824 02:52:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.824 02:52:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:32.824 02:52:27 -- setup/hugepages.sh@83 -- # : 0 00:04:32.824 02:52:27 -- setup/hugepages.sh@84 -- # : 0 00:04:32.824 02:52:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:32.824 02:52:27 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:32.824 02:52:27 -- setup/hugepages.sh@153 -- # 
HUGE_EVEN_ALLOC=yes 00:04:32.824 02:52:27 -- setup/hugepages.sh@153 -- # setup output 00:04:32.824 02:52:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:32.824 02:52:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:34.213 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:34.213 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:34.213 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:34.213 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:34.213 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:34.213 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:34.213 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:34.213 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:34.213 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:34.213 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:34.213 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:34.213 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:34.213 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:34.213 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:34.213 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:34.213 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:34.213 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:34.213 02:52:29 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:34.213 02:52:29 -- setup/hugepages.sh@89 -- # local node 00:04:34.213 02:52:29 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:34.213 02:52:29 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:34.213 02:52:29 -- setup/hugepages.sh@92 -- # local surp 00:04:34.213 02:52:29 -- setup/hugepages.sh@93 -- # local resv 00:04:34.213 02:52:29 -- setup/hugepages.sh@94 -- # local anon 00:04:34.213 02:52:29 -- 
setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:34.213 02:52:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:34.213 02:52:29 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:34.213 02:52:29 -- setup/common.sh@18 -- # local node= 00:04:34.213 02:52:29 -- setup/common.sh@19 -- # local var val 00:04:34.213 02:52:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.213 02:52:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.213 02:52:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.213 02:52:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.213 02:52:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.213 02:52:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.213 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.213 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.213 02:52:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43829012 kB' 'MemAvailable: 47339328 kB' 'Buffers: 3736 kB' 'Cached: 12226604 kB' 'SwapCached: 0 kB' 'Active: 9250252 kB' 'Inactive: 3507584 kB' 'Active(anon): 8855900 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530688 kB' 'Mapped: 217612 kB' 'Shmem: 8328404 kB' 'KReclaimable: 204008 kB' 'Slab: 581056 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 377048 kB' 'KernelStack: 12784 kB' 'PageTables: 8352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9983564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196584 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:34.213 02:52:29 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.213 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.213 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.213 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.214 02:52:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.214 02:52:29 -- setup/common.sh@33 -- # echo 0 00:04:34.214 02:52:29 -- setup/common.sh@33 -- # return 0 00:04:34.214 02:52:29 -- setup/hugepages.sh@97 -- # anon=0 00:04:34.214 02:52:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:34.214 02:52:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.214 02:52:29 -- setup/common.sh@18 -- # local node= 00:04:34.214 02:52:29 -- setup/common.sh@19 -- # local var val
00:04:34.214 02:52:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.214 02:52:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.214 02:52:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.214 02:52:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.214 02:52:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.214 02:52:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.214 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.214 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.214 02:52:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43834996 kB' 'MemAvailable: 47345312 kB' 'Buffers: 3736 kB' 'Cached: 12226604 kB' 'SwapCached: 0 kB' 'Active: 9242640 kB' 'Inactive: 3507584 kB' 'Active(anon): 8848288 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523092 kB' 'Mapped: 216744 kB' 'Shmem: 8328404 kB' 'KReclaimable: 204008 kB' 'Slab: 581040 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 377032 kB' 'KernelStack: 12784 kB' 'PageTables: 8312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9963376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:34.214 02:52:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:34.214 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.214 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.214 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.216 02:52:29 -- setup/common.sh@33 -- # echo 0 00:04:34.216 02:52:29 -- setup/common.sh@33 -- # return 0 00:04:34.216 02:52:29 -- setup/hugepages.sh@99 -- # surp=0 00:04:34.216 02:52:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:34.216 02:52:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:34.216 02:52:29 -- setup/common.sh@18 -- # local node= 00:04:34.216 02:52:29 -- setup/common.sh@19 -- # local var val 00:04:34.216 02:52:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.216 02:52:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.216 02:52:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.216 02:52:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.216 02:52:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.216 02:52:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43835012 kB' 'MemAvailable: 47345328 kB' 'Buffers: 3736 kB' 'Cached: 12226616 kB' 'SwapCached: 0 kB' 'Active: 9241088 kB' 'Inactive: 3507584 kB' 'Active(anon): 8846736 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521524 kB' 'Mapped: 215856 kB' 'Shmem: 8328416 kB' 'KReclaimable: 204008 kB' 'Slab: 581048 kB'
'SReclaimable: 204008 kB' 'SUnreclaim: 377040 kB' 'KernelStack: 12752 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9963388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 
00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 
02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.216 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.216 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 
02:52:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- 
setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.217 02:52:29 -- setup/common.sh@33 -- # echo 0 00:04:34.217 02:52:29 -- setup/common.sh@33 -- # return 0 00:04:34.217 02:52:29 -- setup/hugepages.sh@100 -- # resv=0 00:04:34.217 02:52:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:34.217 nr_hugepages=1024 00:04:34.217 02:52:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:34.217 resv_hugepages=0 00:04:34.217 02:52:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:34.217 surplus_hugepages=0 00:04:34.217 02:52:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:34.217 anon_hugepages=0 00:04:34.217 02:52:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:34.217 02:52:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:34.217 02:52:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:34.217 02:52:29 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:34.217 02:52:29 -- setup/common.sh@18 -- # local node= 00:04:34.217 02:52:29 -- setup/common.sh@19 -- # local var val 00:04:34.217 02:52:29 -- 
setup/common.sh@20 -- # local mem_f mem 00:04:34.217 02:52:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.217 02:52:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.217 02:52:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.217 02:52:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.217 02:52:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43835404 kB' 'MemAvailable: 47345720 kB' 'Buffers: 3736 kB' 'Cached: 12226632 kB' 'SwapCached: 0 kB' 'Active: 9241112 kB' 'Inactive: 3507584 kB' 'Active(anon): 8846760 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521524 kB' 'Mapped: 215856 kB' 'Shmem: 8328432 kB' 'KReclaimable: 204008 kB' 'Slab: 581048 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 377040 kB' 'KernelStack: 12752 kB' 'PageTables: 8044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9963404 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.217 02:52:29 -- 
setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.217 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.217 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 
00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 
00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 
00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 
00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.218 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.218 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.219 02:52:29 -- setup/common.sh@33 -- # echo 1024 00:04:34.219 02:52:29 -- setup/common.sh@33 -- # return 0 00:04:34.219 02:52:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:34.219 02:52:29 -- setup/hugepages.sh@112 -- # get_nodes 00:04:34.219 02:52:29 -- setup/hugepages.sh@27 -- # local node 00:04:34.219 02:52:29 -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:04:34.219 02:52:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:34.219 02:52:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.219 02:52:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:34.219 02:52:29 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:34.219 02:52:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:34.219 02:52:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.219 02:52:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.219 02:52:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:34.219 02:52:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.219 02:52:29 -- setup/common.sh@18 -- # local node=0 00:04:34.219 02:52:29 -- setup/common.sh@19 -- # local var val 00:04:34.219 02:52:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.219 02:52:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.219 02:52:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:34.219 02:52:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:34.219 02:52:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.219 02:52:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27875304 kB' 'MemUsed: 4954580 kB' 'SwapCached: 0 kB' 'Active: 2633748 kB' 'Inactive: 110796 kB' 'Active(anon): 2522860 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2501324 kB' 'Mapped: 41204 kB' 'AnonPages: 246356 kB' 'Shmem: 2279640 kB' 'KernelStack: 7544 kB' 'PageTables: 4824 kB' 'SecPageTables: 
0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92820 kB' 'Slab: 324316 kB' 'SReclaimable: 92820 kB' 'SUnreclaim: 231496 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 
-- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 
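The trace above shows setup/common.sh scanning meminfo-style `key: value` pairs, issuing `continue` for every field until it reaches the requested key (`HugePages_Surp` here) and then echoing its value. A minimal stand-alone sketch of that scan loop follows; the function name `get_field` is hypothetical, not SPDK's actual helper:

```shell
# Hypothetical sketch of the scan loop traced above: read "key: value"
# lines from stdin, skip non-matching keys (the "continue" lines in the
# trace), and print the value of the first matching key.
get_field() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] || continue
    echo "$val"
    return 0
  done
  return 1
}

# Example: extract HugePages_Surp from a meminfo-style stream.
printf '%s\n' 'MemFree: 123 kB' 'HugePages_Surp: 0' | get_field HugePages_Surp  # -> 0
```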
00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 
-- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.219 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.219 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@33 -- # echo 0 00:04:34.220 02:52:29 -- setup/common.sh@33 -- # return 0 00:04:34.220 02:52:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.220 02:52:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.220 02:52:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.220 02:52:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:34.220 02:52:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.220 02:52:29 -- setup/common.sh@18 -- # local node=1 00:04:34.220 02:52:29 -- setup/common.sh@19 -- # local var val 00:04:34.220 02:52:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.220 02:52:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.220 02:52:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:34.220 02:52:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:34.220 02:52:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.220 02:52:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15961400 kB' 'MemUsed: 11750424 kB' 'SwapCached: 0 kB' 
'Active: 6607388 kB' 'Inactive: 3396788 kB' 'Active(anon): 6323924 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9729072 kB' 'Mapped: 174652 kB' 'AnonPages: 275172 kB' 'Shmem: 6048820 kB' 'KernelStack: 5208 kB' 'PageTables: 3220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111188 kB' 'Slab: 256708 kB' 'SReclaimable: 111188 kB' 'SUnreclaim: 145520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 
02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 
02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.220 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.220 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.221 02:52:29 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.221 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.221 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.221 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.221 02:52:29 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.221 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.221 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.221 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.221 02:52:29 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.221 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.221 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.221 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.221 02:52:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.221 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.221 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.221 02:52:29 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:34.221 02:52:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.221 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.221 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.221 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.221 02:52:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.221 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.221 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.221 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.221 02:52:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.221 02:52:29 -- setup/common.sh@32 -- # continue 00:04:34.221 02:52:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.221 02:52:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.221 02:52:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.221 02:52:29 -- setup/common.sh@33 -- # echo 0 00:04:34.221 02:52:29 -- setup/common.sh@33 -- # return 0 00:04:34.221 02:52:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.221 02:52:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.221 02:52:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.221 02:52:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.221 02:52:29 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:34.221 node0=512 expecting 512 00:04:34.221 02:52:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.221 02:52:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.221 02:52:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.221 02:52:29 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:34.221 node1=512 expecting 512 00:04:34.221 02:52:29 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:34.221 00:04:34.221 real 
0m1.456s 00:04:34.221 user 0m0.638s 00:04:34.221 sys 0m0.784s 00:04:34.221 02:52:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.221 02:52:29 -- common/autotest_common.sh@10 -- # set +x 00:04:34.221 ************************************ 00:04:34.221 END TEST even_2G_alloc 00:04:34.221 ************************************ 00:04:34.221 02:52:29 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:34.221 02:52:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:34.221 02:52:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:34.221 02:52:29 -- common/autotest_common.sh@10 -- # set +x 00:04:34.221 ************************************ 00:04:34.221 START TEST odd_alloc 00:04:34.221 ************************************ 00:04:34.221 02:52:29 -- common/autotest_common.sh@1104 -- # odd_alloc 00:04:34.221 02:52:29 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:34.221 02:52:29 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:34.221 02:52:29 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:34.221 02:52:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:34.221 02:52:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:34.221 02:52:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:34.221 02:52:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:34.221 02:52:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.221 02:52:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:34.221 02:52:29 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:34.221 02:52:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.221 02:52:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.221 02:52:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:34.221 02:52:29 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:34.221 02:52:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.221 02:52:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 
1]=512 00:04:34.221 02:52:29 -- setup/hugepages.sh@83 -- # : 513 00:04:34.221 02:52:29 -- setup/hugepages.sh@84 -- # : 1 00:04:34.221 02:52:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.221 02:52:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:34.221 02:52:29 -- setup/hugepages.sh@83 -- # : 0 00:04:34.221 02:52:29 -- setup/hugepages.sh@84 -- # : 0 00:04:34.221 02:52:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.221 02:52:29 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:34.221 02:52:29 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:34.221 02:52:29 -- setup/hugepages.sh@160 -- # setup output 00:04:34.221 02:52:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.221 02:52:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:35.603 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:35.603 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:35.603 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:35.603 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:35.603 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:35.603 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:35.603 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:35.603 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:35.603 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:35.603 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:35.603 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:35.603 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:35.603 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:35.603 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:35.603 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:35.603 0000:80:04.1 (8086 0e21): Already using 
the vfio-pci driver 00:04:35.603 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:35.603 02:52:30 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:35.603 02:52:30 -- setup/hugepages.sh@89 -- # local node 00:04:35.603 02:52:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:35.603 02:52:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:35.603 02:52:30 -- setup/hugepages.sh@92 -- # local surp 00:04:35.603 02:52:30 -- setup/hugepages.sh@93 -- # local resv 00:04:35.603 02:52:30 -- setup/hugepages.sh@94 -- # local anon 00:04:35.603 02:52:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:35.603 02:52:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:35.603 02:52:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:35.603 02:52:30 -- setup/common.sh@18 -- # local node= 00:04:35.603 02:52:30 -- setup/common.sh@19 -- # local var val 00:04:35.603 02:52:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.603 02:52:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.603 02:52:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.603 02:52:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.603 02:52:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.603 02:52:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.603 02:52:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43835564 kB' 'MemAvailable: 47345880 kB' 'Buffers: 3736 kB' 'Cached: 12226688 kB' 'SwapCached: 0 kB' 'Active: 9242700 kB' 'Inactive: 3507584 kB' 'Active(anon): 8848348 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 
523716 kB' 'Mapped: 215976 kB' 'Shmem: 8328488 kB' 'KReclaimable: 204008 kB' 'Slab: 580844 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 376836 kB' 'KernelStack: 12960 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9967468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196836 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
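The odd_alloc test above requests 1025 hugepages (HUGEMEM=2049) and the hugepages.sh@81–@84 lines split them across the two NUMA nodes as 513/512 (`nodes_test[1]=512`, then `nodes_test[0]=513`). A hedged reconstruction of that split logic; the function name and output format here are illustrative, not hugepages.sh's own:

```shell
# Assumed reconstruction of the per-node split seen in the trace: each
# node gets floor(total/nodes), and the remainder is given to the
# lowest-numbered nodes, walking from the highest node down (matching
# nodes_test[1]=512 before nodes_test[0]=513 for 1025 pages).
split_hugepages() {
  local total=$1 nodes=$2 n count
  local per=$((total / nodes)) rem=$((total % nodes))
  for ((n = nodes - 1; n >= 0; n--)); do
    count=$per
    if ((n < rem)); then
      count=$((count + 1))
    fi
    echo "node$n=$count"
  done
}

split_hugepages 1025 2   # -> node1=512, then node0=513
```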
00:04:35.603 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.603 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.603 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 
00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 
00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:35.604 02:52:30 -- setup/common.sh@33 -- # echo 0 00:04:35.604 02:52:30 -- setup/common.sh@33 -- # return 0 00:04:35.604 02:52:30 -- setup/hugepages.sh@97 -- # anon=0 00:04:35.604 02:52:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:35.604 02:52:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.604 02:52:30 -- setup/common.sh@18 -- # local node= 00:04:35.604 02:52:30 -- setup/common.sh@19 -- # local var val 00:04:35.604 02:52:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.604 02:52:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.604 02:52:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.604 02:52:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.604 02:52:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.604 02:52:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43836168 kB' 'MemAvailable: 47346484 kB' 'Buffers: 3736 kB' 'Cached: 12226688 kB' 'SwapCached: 0 kB' 'Active: 9243784 kB' 'Inactive: 3507584 kB' 'Active(anon): 8849432 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524388 kB' 'Mapped: 215908 kB' 'Shmem: 8328488 kB' 'KReclaimable: 204008 kB' 'Slab: 580968 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 376960 kB' 'KernelStack: 13184 kB' 'PageTables: 9072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9966104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
196804 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # 
continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.604 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.604 02:52:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 
02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 
00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 
-- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 
00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 
02:52:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.605 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.605 02:52:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.606 02:52:30 -- setup/common.sh@33 -- # echo 0 00:04:35.606 02:52:30 -- setup/common.sh@33 -- # return 0 00:04:35.606 02:52:30 -- setup/hugepages.sh@99 -- # surp=0 00:04:35.606 02:52:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:35.606 02:52:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:35.606 02:52:30 -- setup/common.sh@18 -- # local node= 00:04:35.606 02:52:30 -- setup/common.sh@19 -- # local var val 00:04:35.606 02:52:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.606 02:52:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.606 02:52:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.606 02:52:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.606 02:52:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.606 02:52:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@16 -- # 
printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43834156 kB' 'MemAvailable: 47344472 kB' 'Buffers: 3736 kB' 'Cached: 12226692 kB' 'SwapCached: 0 kB' 'Active: 9243504 kB' 'Inactive: 3507584 kB' 'Active(anon): 8849152 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524080 kB' 'Mapped: 215868 kB' 'Shmem: 8328492 kB' 'KReclaimable: 204008 kB' 'Slab: 581032 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 377024 kB' 'KernelStack: 13088 kB' 'PageTables: 9700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9963480 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 
00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 
00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.606 02:52:30 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.606 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.606 [... identical "[[ <key> == HugePages_Rsvd ]] / continue" xtrace elided for the remaining /proc/meminfo keys SReclaimable through FilePmdMapped ...] 00:04:35.607 02:52:30 --
setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:35.607 02:52:30 -- setup/common.sh@33 -- # echo 0 00:04:35.607 02:52:30 -- setup/common.sh@33 -- # return 0 00:04:35.607 02:52:30 -- setup/hugepages.sh@100 -- # resv=0 00:04:35.607 02:52:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:35.607 nr_hugepages=1025 00:04:35.607 02:52:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:35.607 resv_hugepages=0 00:04:35.607 02:52:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:35.607 surplus_hugepages=0 00:04:35.607 
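The trace above shows setup/common.sh scanning /proc/meminfo key by key under `IFS=': '` until it hits the requested field, then echoing its value (here `HugePages_Rsvd` → `0`). A minimal, hypothetical re-creation of that pattern, with sample data inlined so it does not depend on a live /proc/meminfo:

```shell
# Hypothetical sketch of the get_meminfo pattern traced above
# (setup/common.sh@31-33): split "Key: value" lines on IFS=': ' and
# echo the value once the requested key matches. The inline here-doc
# stands in for /proc/meminfo; function and data are illustrative only.
get_meminfo_sketch() {
  local key=$1 var val _
  while IFS=': ' read -r var val _; do
    if [[ $var == "$key" ]]; then
      echo "$val"
      return 0
    fi
  done <<'EOF'
HugePages_Total: 1025
HugePages_Free: 1025
HugePages_Rsvd: 0
HugePages_Surp: 0
EOF
  return 1
}

get_meminfo_sketch HugePages_Rsvd   # prints 0
```

The real script additionally emits an xtrace line per non-matching key, which is why the log repeats the `[[ <key> == ... ]] / continue` pair for every field in /proc/meminfo.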
02:52:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:35.607 anon_hugepages=0 00:04:35.607 02:52:30 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:35.607 02:52:30 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:35.607 02:52:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:35.607 02:52:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:35.607 02:52:30 -- setup/common.sh@18 -- # local node= 00:04:35.607 02:52:30 -- setup/common.sh@19 -- # local var val 00:04:35.607 02:52:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.607 02:52:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.607 02:52:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.607 02:52:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.607 02:52:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.607 02:52:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.607 02:52:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43834408 kB' 'MemAvailable: 47344724 kB' 'Buffers: 3736 kB' 'Cached: 12226716 kB' 'SwapCached: 0 kB' 'Active: 9241472 kB' 'Inactive: 3507584 kB' 'Active(anon): 8847120 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521484 kB' 'Mapped: 215864 kB' 'Shmem: 8328516 kB' 'KReclaimable: 204008 kB' 'Slab: 580968 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 376960 kB' 'KernelStack: 12784 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9963492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 
'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.607 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.607 02:52:30 -- setup/common.sh@32 -- # 
continue 00:04:35.607 [... identical "[[ <key> == HugePages_Total ]] / continue" xtrace elided for the /proc/meminfo keys Active through Unaccepted ...] 00:04:35.608 02:52:30 -- setup/common.sh@31 -- # IFS=': '
00:04:35.608 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.608 02:52:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:35.608 02:52:30 -- setup/common.sh@33 -- # echo 1025 00:04:35.608 02:52:30 -- setup/common.sh@33 -- # return 0 00:04:35.608 02:52:30 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:35.608 02:52:30 -- setup/hugepages.sh@112 -- # get_nodes 00:04:35.608 02:52:30 -- setup/hugepages.sh@27 -- # local node 00:04:35.608 02:52:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.608 02:52:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:35.608 02:52:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:35.608 02:52:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:35.608 02:52:30 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:35.608 02:52:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:35.609 02:52:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.609 02:52:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.609 02:52:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:35.609 02:52:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.609 02:52:30 -- setup/common.sh@18 -- # local node=0 00:04:35.609 02:52:30 -- setup/common.sh@19 -- # local var val 00:04:35.609 02:52:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.609 02:52:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.609 02:52:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:35.609 02:52:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:35.609 02:52:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.609 02:52:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.609 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 
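At hugepages.sh@107 and @110 above, the script verifies that the kernel-reported total matches what was requested, and get_nodes records 512 and 513 hugepages for node0 and node1. A hypothetical restatement of those consistency checks:

```shell
# Hypothetical restatement of the checks traced above (hugepages.sh@107
# and @110): HugePages_Total from /proc/meminfo must equal the requested
# nr_hugepages plus surplus and reserved pages, and the per-node counts
# (512 on node0, 513 on node1 in this run) must sum to that total.
nr_hugepages=1025 surp=0 resv=0
total=1025            # HugePages_Total from /proc/meminfo
node0=512 node1=513   # nodes_sys[] values from get_nodes

(( total == nr_hugepages + surp + resv )) && echo "global count consistent"
(( node0 + node1 == total )) && echo "per-node split consistent"
```

The odd 512/513 split is deliberate for this test: an odd total (1025) cannot divide evenly across two NUMA nodes, so the per-node loop that follows reads each node's `HugePages_Surp` separately.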
00:04:35.609 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.609 02:52:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27868264 kB' 'MemUsed: 4961620 kB' 'SwapCached: 0 kB' 'Active: 2633556 kB' 'Inactive: 110796 kB' 'Active(anon): 2522668 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2501332 kB' 'Mapped: 41204 kB' 'AnonPages: 246152 kB' 'Shmem: 2279648 kB' 'KernelStack: 7560 kB' 'PageTables: 4776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92820 kB' 'Slab: 324084 kB' 'SReclaimable: 92820 kB' 'SUnreclaim: 231264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:35.609 02:52:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.609 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.609 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.609 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.609 02:52:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.609 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.609 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.609 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.609 02:52:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.609 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.609 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.609 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.609 02:52:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.609 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.609 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.609 
02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.609 [... identical "[[ <key> == HugePages_Surp ]] / continue" xtrace elided for node0 meminfo keys Active through FileHugePages ...] 00:04:35.609 02:52:30 -- setup/common.sh@32 --
# [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.609 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.609 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.609 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.609 02:52:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.609 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.609 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.609 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.609 02:52:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.609 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.609 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.609 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.609 02:52:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.609 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.609 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.609 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.609 02:52:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.609 02:52:30 -- setup/common.sh@33 -- # echo 0 00:04:35.609 02:52:30 -- setup/common.sh@33 -- # return 0 00:04:35.610 02:52:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.610 02:52:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.610 02:52:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.610 02:52:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:35.610 02:52:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.610 02:52:30 -- setup/common.sh@18 -- # local node=1 00:04:35.610 02:52:30 -- setup/common.sh@19 -- # local var val 00:04:35.610 02:52:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:35.610 02:52:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.610 02:52:30 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:35.610 02:52:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:35.610 02:52:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.610 02:52:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15966700 kB' 'MemUsed: 11745124 kB' 'SwapCached: 0 kB' 'Active: 6607556 kB' 'Inactive: 3396788 kB' 'Active(anon): 6324092 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9729140 kB' 'Mapped: 174660 kB' 'AnonPages: 275436 kB' 'Shmem: 6048888 kB' 'KernelStack: 5224 kB' 'PageTables: 3264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111188 kB' 'Slab: 256884 kB' 'SReclaimable: 111188 kB' 'SUnreclaim: 145696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- 
setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # 
continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 
00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 
-- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # continue 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:35.610 02:52:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:35.610 02:52:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.611 02:52:30 -- setup/common.sh@33 -- # echo 0 00:04:35.611 02:52:30 -- setup/common.sh@33 -- # return 0 00:04:35.611 02:52:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.611 02:52:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.611 02:52:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.611 02:52:30 -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:04:35.611 02:52:30 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:35.611 node0=512 expecting 513 00:04:35.611 02:52:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.611 02:52:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.611 02:52:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.611 02:52:30 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:35.611 node1=513 expecting 512 00:04:35.611 02:52:30 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:35.611 00:04:35.611 real 0m1.416s 00:04:35.611 user 0m0.597s 00:04:35.611 sys 0m0.782s 00:04:35.611 02:52:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.611 02:52:30 -- common/autotest_common.sh@10 -- # set +x 00:04:35.611 ************************************ 00:04:35.611 END TEST odd_alloc 00:04:35.611 ************************************ 00:04:35.870 02:52:30 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:35.870 02:52:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:35.870 02:52:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:35.870 02:52:30 -- common/autotest_common.sh@10 -- # set +x 00:04:35.870 ************************************ 00:04:35.870 START TEST custom_alloc 00:04:35.870 ************************************ 00:04:35.870 02:52:30 -- common/autotest_common.sh@1104 -- # custom_alloc 00:04:35.870 02:52:30 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:35.870 02:52:30 -- setup/hugepages.sh@169 -- # local node 00:04:35.870 02:52:30 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:35.870 02:52:30 -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:35.870 02:52:30 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:35.870 02:52:30 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:35.870 02:52:30 -- setup/hugepages.sh@49 -- # local size=1048576 
00:04:35.870 02:52:30 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:35.870 02:52:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:35.870 02:52:30 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:35.870 02:52:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:35.870 02:52:30 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:35.870 02:52:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:35.870 02:52:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:35.870 02:52:30 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:35.870 02:52:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:35.870 02:52:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:35.870 02:52:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:35.870 02:52:30 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:35.870 02:52:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.870 02:52:30 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:35.870 02:52:30 -- setup/hugepages.sh@83 -- # : 256 00:04:35.870 02:52:30 -- setup/hugepages.sh@84 -- # : 1 00:04:35.870 02:52:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.870 02:52:30 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:35.870 02:52:30 -- setup/hugepages.sh@83 -- # : 0 00:04:35.870 02:52:30 -- setup/hugepages.sh@84 -- # : 0 00:04:35.870 02:52:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.870 02:52:30 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:35.870 02:52:30 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:35.870 02:52:30 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:35.870 02:52:30 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:35.870 02:52:30 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:35.870 02:52:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:35.870 02:52:30 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:35.870 02:52:30 -- 
setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:35.870 02:52:30 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:35.870 02:52:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:35.870 02:52:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:35.870 02:52:30 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:35.870 02:52:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:35.870 02:52:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:35.870 02:52:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:35.870 02:52:30 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:35.870 02:52:30 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:35.870 02:52:30 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:35.870 02:52:30 -- setup/hugepages.sh@78 -- # return 0 00:04:35.870 02:52:30 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:35.870 02:52:30 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:35.870 02:52:30 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:35.870 02:52:30 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:35.870 02:52:30 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:35.870 02:52:30 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:35.870 02:52:30 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:35.870 02:52:30 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:35.870 02:52:30 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:35.870 02:52:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:35.870 02:52:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:35.870 02:52:30 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:35.870 02:52:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:35.870 02:52:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:35.870 02:52:30 -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:35.870 02:52:30 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:35.870 02:52:30 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:35.870 02:52:30 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:35.870 02:52:30 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:35.870 02:52:30 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:35.870 02:52:30 -- setup/hugepages.sh@78 -- # return 0 00:04:35.870 02:52:30 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:35.870 02:52:30 -- setup/hugepages.sh@187 -- # setup output 00:04:35.870 02:52:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.870 02:52:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:36.808 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:36.808 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:36.808 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:36.808 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:36.808 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:36.808 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:36.808 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:36.808 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:36.808 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:36.808 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:36.808 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:36.808 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:36.809 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:36.809 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:36.809 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:36.809 0000:80:04.1 (8086 0e21): Already using the 
vfio-pci driver 00:04:36.809 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:37.069 02:52:32 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:37.069 02:52:32 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:37.069 02:52:32 -- setup/hugepages.sh@89 -- # local node 00:04:37.069 02:52:32 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:37.069 02:52:32 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:37.069 02:52:32 -- setup/hugepages.sh@92 -- # local surp 00:04:37.069 02:52:32 -- setup/hugepages.sh@93 -- # local resv 00:04:37.069 02:52:32 -- setup/hugepages.sh@94 -- # local anon 00:04:37.069 02:52:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:37.069 02:52:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:37.069 02:52:32 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:37.069 02:52:32 -- setup/common.sh@18 -- # local node= 00:04:37.069 02:52:32 -- setup/common.sh@19 -- # local var val 00:04:37.069 02:52:32 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.069 02:52:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.069 02:52:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.069 02:52:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.069 02:52:32 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.069 02:52:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.069 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.069 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.069 02:52:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42768972 kB' 'MemAvailable: 46279288 kB' 'Buffers: 3736 kB' 'Cached: 12226784 kB' 'SwapCached: 0 kB' 'Active: 9241844 kB' 'Inactive: 3507584 kB' 'Active(anon): 8847492 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 
kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522056 kB' 'Mapped: 215868 kB' 'Shmem: 8328584 kB' 'KReclaimable: 204008 kB' 'Slab: 580916 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 376908 kB' 'KernelStack: 12768 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9963676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196708 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:37.069 02:52:32 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.069 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.069 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.069 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.069 02:52:32 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.069 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.069 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.069 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.069 02:52:32 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.069 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.069 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.069 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.069 02:52:32 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.069 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.069 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.069 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 
00:04:37.069 02:52:32 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.069 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.069 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.069 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.069 02:52:32 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.069 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.069 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.069 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.069 02:52:32 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.069 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.069 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.069 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.069 02:52:32 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.069 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # 
[[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 
02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 
02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': 
' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.070 02:52:32 -- setup/common.sh@33 -- # echo 0 00:04:37.070 02:52:32 -- setup/common.sh@33 -- # return 0 00:04:37.070 02:52:32 -- setup/hugepages.sh@97 -- # anon=0 00:04:37.070 02:52:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:37.070 02:52:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.070 02:52:32 -- setup/common.sh@18 -- # local node= 00:04:37.070 02:52:32 -- setup/common.sh@19 -- # local var val 00:04:37.070 02:52:32 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.070 02:52:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.070 02:52:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.070 02:52:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.070 02:52:32 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.070 02:52:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42769592 kB' 'MemAvailable: 46279908 kB' 'Buffers: 3736 kB' 'Cached: 12226784 kB' 'SwapCached: 0 kB' 'Active: 9242160 kB' 'Inactive: 3507584 kB' 'Active(anon): 8847808 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522392 kB' 'Mapped: 215944 kB' 'Shmem: 8328584 kB' 'KReclaimable: 204008 kB' 'Slab: 580988 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 376980 kB' 'KernelStack: 12816 kB' 'PageTables: 8096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9963688 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.070 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.070 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 
-- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- 
# continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 
00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 
-- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.071 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.071 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 
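The trace above is one pattern repeated for every /proc/meminfo field: set `IFS=': '`, `read -r var val _` to split each line into key and value, and `continue` until the key matches the one requested (xtrace prints the literal comparison with backslash-escaped characters, e.g. `[[ $var == \H\u\g\e... ]]`). A minimal standalone sketch of that loop follows; `get_meminfo_value` is a hypothetical name for illustration (the real helper is `get_meminfo` in `setup/common.sh`), and the demo reads a small meminfo-style sample file rather than the live /proc file.

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo loop seen in the trace: split each line on
# ': ', skip until the wanted key matches, then print its value.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the [[ ... ]] / continue pairs above
        echo "$val"
        return 0
    done < "${2:-/proc/meminfo}"
    return 1
}

# Demo against a meminfo-style sample instead of the live /proc file.
sample=$(mktemp)
printf '%s\n' 'MemTotal: 60541708 kB' 'HugePages_Surp: 0' > "$sample"
get_meminfo_value HugePages_Surp "$sample"   # prints: 0
rm -f "$sample"
```

For a field with a `kB` suffix (e.g. `MemTotal`), the third `read` variable (`_`) swallows the unit, so only the number is returned.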
00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.072 02:52:32 -- setup/common.sh@33 -- # echo 0 00:04:37.072 02:52:32 -- setup/common.sh@33 -- # return 0 00:04:37.072 02:52:32 -- setup/hugepages.sh@99 -- # surp=0 00:04:37.072 02:52:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:37.072 02:52:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:37.072 02:52:32 -- setup/common.sh@18 -- # local node= 00:04:37.072 02:52:32 -- setup/common.sh@19 -- # local var val 00:04:37.072 02:52:32 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.072 02:52:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.072 02:52:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.072 02:52:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.072 02:52:32 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.072 02:52:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42769724 kB' 'MemAvailable: 46280040 kB' 'Buffers: 3736 kB' 'Cached: 12226796 kB' 'SwapCached: 0 kB' 'Active: 9241668 kB' 'Inactive: 3507584 kB' 'Active(anon): 8847316 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521944 kB' 'Mapped: 215868 kB' 'Shmem: 8328596 kB' 'KReclaimable: 204008 kB' 'Slab: 580964 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 376956 kB' 'KernelStack: 12800 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9963700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- 
setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 
00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 
02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.072 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.072 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 
-- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 
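Before each loop, the trace shows `get_meminfo` choosing its input file: with `node` empty (`local node=`), the test `[[ -e /sys/devices/system/node/node/meminfo ]]` fails (note the missing node number in the path) and it falls back to `/proc/meminfo`. The `mem=("${mem[@]#Node +([0-9]) }")` step strips the `Node N ` prefix that per-node meminfo files carry. A hedged sketch of those two steps, with sample array contents invented for the demo:

```shell
#!/usr/bin/env bash
# Sketch of the source-file selection and "Node N " prefix strip from the
# trace. With node unset the /sys path becomes .../node/node/meminfo (as
# seen above), the -e test fails, and /proc/meminfo is used instead.
shopt -s extglob   # required for the +([0-9]) pattern in the strip below
node=""
mem_f=/proc/meminfo
[[ -e /sys/devices/system/node/node${node}/meminfo ]] && \
    mem_f=/sys/devices/system/node/node${node}/meminfo

# Per-node files prefix every line with "Node <id> "; strip that prefix
# so the rest of the parser sees plain "Key: value" lines.
mem=("Node 0 HugePages_Total: 768" "Node 0 HugePages_Free: 768")
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
```

When run against the system-wide `/proc/meminfo` (as in this trace), the lines have no `Node N ` prefix and the extglob strip is a no-op.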
00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 
02:52:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.073 02:52:32 -- setup/common.sh@33 -- # echo 0 00:04:37.073 02:52:32 -- setup/common.sh@33 -- # return 0 00:04:37.073 02:52:32 -- setup/hugepages.sh@100 -- # resv=0 00:04:37.073 02:52:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:37.073 nr_hugepages=1536 00:04:37.073 02:52:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:37.073 resv_hugepages=0 00:04:37.073 02:52:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:37.073 surplus_hugepages=0 
00:04:37.073 02:52:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:37.073 anon_hugepages=0 00:04:37.073 02:52:32 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:37.073 02:52:32 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:37.073 02:52:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:37.073 02:52:32 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:37.073 02:52:32 -- setup/common.sh@18 -- # local node= 00:04:37.073 02:52:32 -- setup/common.sh@19 -- # local var val 00:04:37.073 02:52:32 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.073 02:52:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.073 02:52:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.073 02:52:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.073 02:52:32 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.073 02:52:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42770364 kB' 'MemAvailable: 46280680 kB' 'Buffers: 3736 kB' 'Cached: 12226812 kB' 'SwapCached: 0 kB' 'Active: 9241692 kB' 'Inactive: 3507584 kB' 'Active(anon): 8847340 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521944 kB' 'Mapped: 215868 kB' 'Shmem: 8328612 kB' 'KReclaimable: 204008 kB' 'Slab: 580964 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 376956 kB' 'KernelStack: 12800 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9963716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.073 02:52:32 -- 
setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.073 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.073 02:52:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- 
setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 
00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.074 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.074 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.075 02:52:32 -- setup/common.sh@33 -- # echo 1536 00:04:37.075 02:52:32 -- setup/common.sh@33 -- # return 0 00:04:37.075 02:52:32 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:37.075 02:52:32 -- setup/hugepages.sh@112 -- # get_nodes 00:04:37.075 02:52:32 -- setup/hugepages.sh@27 -- # local node 00:04:37.075 02:52:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.075 02:52:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:37.075 02:52:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.075 02:52:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:37.075 02:52:32 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:37.075 02:52:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:37.075 02:52:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.075 02:52:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.075 02:52:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:37.075 02:52:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.075 02:52:32 -- setup/common.sh@18 -- # local node=0 00:04:37.075 02:52:32 -- setup/common.sh@19 -- # local var val 00:04:37.075 02:52:32 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.075 02:52:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.075 02:52:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:37.075 02:52:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:37.075 02:52:32 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.075 02:52:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.075 02:52:32 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 27864428 kB' 'MemUsed: 4965456 kB' 'SwapCached: 0 kB' 'Active: 2633292 kB' 'Inactive: 110796 kB' 'Active(anon): 2522404 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2501332 kB' 'Mapped: 41204 kB' 'AnonPages: 245888 kB' 'Shmem: 2279648 kB' 'KernelStack: 7560 kB' 'PageTables: 4724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92820 kB' 'Slab: 324120 kB' 'SReclaimable: 92820 kB' 'SUnreclaim: 231300 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.075 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.075 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 
-- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 
00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@33 -- # echo 0 00:04:37.335 02:52:32 -- setup/common.sh@33 -- # return 0 00:04:37.335 02:52:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:37.335 02:52:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.335 02:52:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.335 02:52:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:37.335 02:52:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.335 02:52:32 -- setup/common.sh@18 -- # local node=1 00:04:37.335 02:52:32 -- setup/common.sh@19 -- # local var val 00:04:37.335 02:52:32 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.335 02:52:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:04:37.335 02:52:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:37.335 02:52:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:37.335 02:52:32 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.335 02:52:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 14906520 kB' 'MemUsed: 12805304 kB' 'SwapCached: 0 kB' 'Active: 6608436 kB' 'Inactive: 3396788 kB' 'Active(anon): 6324972 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396788 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9729244 kB' 'Mapped: 174664 kB' 'AnonPages: 276060 kB' 'Shmem: 6048992 kB' 'KernelStack: 5240 kB' 'PageTables: 3304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 111188 kB' 'Slab: 256844 kB' 'SReclaimable: 111188 kB' 'SUnreclaim: 145656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 
02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- 
setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.335 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.335 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 
02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 
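The long scan traced above (dozens of `continue` iterations until the requested key matches) is functionally equivalent to a single `awk` lookup over the same file. A small illustrative helper, not part of `setup/common.sh`, assuming meminfo-style `Key:  value [kB]` lines:

```shell
# Sketch: one-pass key lookup equivalent to the traced read loop.
# Usage: meminfo_get <file> <key>   (prints 0 if the key is absent,
# matching the trace's "echo 0" fallback)
meminfo_get() {
    awk -v key="$2:" '
        $1 == key { print $2; found = 1 }
        END       { if (!found) print 0 }
    ' "$1"
}
```

For example, `meminfo_get /proc/meminfo HugePages_Surp` yields the same value the loop above extracts for the surplus-page check.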
02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # continue 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.336 02:52:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.336 02:52:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.336 02:52:32 -- setup/common.sh@33 -- # echo 0 00:04:37.336 02:52:32 -- setup/common.sh@33 -- # return 0 00:04:37.336 02:52:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:37.336 02:52:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:37.336 02:52:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:37.336 02:52:32 -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:04:37.336 02:52:32 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:37.336 node0=512 expecting 512 00:04:37.336 02:52:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:37.336 02:52:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:37.336 02:52:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:37.336 02:52:32 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:37.336 node1=1024 expecting 1024 00:04:37.336 02:52:32 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:37.336 00:04:37.336 real 0m1.491s 00:04:37.336 user 0m0.640s 00:04:37.336 sys 0m0.817s 00:04:37.336 02:52:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.336 02:52:32 -- common/autotest_common.sh@10 -- # set +x 00:04:37.336 ************************************ 00:04:37.336 END TEST custom_alloc 00:04:37.336 ************************************ 00:04:37.336 02:52:32 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:37.336 02:52:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:37.336 02:52:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:37.336 02:52:32 -- common/autotest_common.sh@10 -- # set +x 00:04:37.336 ************************************ 00:04:37.336 START TEST no_shrink_alloc 00:04:37.336 ************************************ 00:04:37.336 02:52:32 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:04:37.336 02:52:32 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:37.336 02:52:32 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:37.336 02:52:32 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:37.336 02:52:32 -- setup/hugepages.sh@51 -- # shift 00:04:37.336 02:52:32 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:37.336 02:52:32 -- setup/hugepages.sh@52 -- # local node_ids 00:04:37.336 02:52:32 -- setup/hugepages.sh@55 -- # (( size >= 
default_hugepages )) 00:04:37.336 02:52:32 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:37.336 02:52:32 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:37.336 02:52:32 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:37.336 02:52:32 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:37.336 02:52:32 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:37.336 02:52:32 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:37.336 02:52:32 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:37.336 02:52:32 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:37.336 02:52:32 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:37.336 02:52:32 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:37.336 02:52:32 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:37.336 02:52:32 -- setup/hugepages.sh@73 -- # return 0 00:04:37.336 02:52:32 -- setup/hugepages.sh@198 -- # setup output 00:04:37.336 02:52:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.336 02:52:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:38.274 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:38.274 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:38.274 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:38.274 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:38.274 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:38.274 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:38.274 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:38.274 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:38.274 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:38.274 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:38.274 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:38.274 0000:80:04.5 (8086 0e25): Already using 
the vfio-pci driver 00:04:38.274 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:38.274 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:38.274 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:38.274 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:38.274 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:38.536 02:52:33 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:38.536 02:52:33 -- setup/hugepages.sh@89 -- # local node 00:04:38.536 02:52:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:38.536 02:52:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:38.536 02:52:33 -- setup/hugepages.sh@92 -- # local surp 00:04:38.536 02:52:33 -- setup/hugepages.sh@93 -- # local resv 00:04:38.536 02:52:33 -- setup/hugepages.sh@94 -- # local anon 00:04:38.536 02:52:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:38.536 02:52:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:38.536 02:52:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:38.536 02:52:33 -- setup/common.sh@18 -- # local node= 00:04:38.536 02:52:33 -- setup/common.sh@19 -- # local var val 00:04:38.536 02:52:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.536 02:52:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.536 02:52:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.536 02:52:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.536 02:52:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.536 02:52:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43809432 kB' 'MemAvailable: 47319748 kB' 'Buffers: 3736 kB' 'Cached: 12226880 kB' 'SwapCached: 0 kB' 'Active: 9242028 
kB' 'Inactive: 3507584 kB' 'Active(anon): 8847676 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522184 kB' 'Mapped: 215916 kB' 'Shmem: 8328680 kB' 'KReclaimable: 204008 kB' 'Slab: 580900 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 376892 kB' 'KernelStack: 12784 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9963532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # 
[[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # 
continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.536 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.536 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 
00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val 
_ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.537 02:52:33 -- setup/common.sh@33 -- # echo 0 00:04:38.537 02:52:33 -- setup/common.sh@33 -- # return 0 00:04:38.537 02:52:33 -- setup/hugepages.sh@97 -- # anon=0 00:04:38.537 02:52:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:38.537 02:52:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.537 02:52:33 -- setup/common.sh@18 -- # local node= 00:04:38.537 02:52:33 -- setup/common.sh@19 -- # local var val 00:04:38.537 02:52:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.537 02:52:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.537 02:52:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.537 02:52:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.537 02:52:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.537 02:52:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43817396 kB' 'MemAvailable: 47327712 kB' 'Buffers: 3736 kB' 'Cached: 12226880 kB' 'SwapCached: 0 kB' 'Active: 9242312 kB' 'Inactive: 3507584 kB' 'Active(anon): 8847960 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522468 kB' 'Mapped: 215924 kB' 'Shmem: 8328680 kB' 'KReclaimable: 204008 kB' 'Slab: 580884 
kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 376876 kB' 'KernelStack: 12768 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9963912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # 
continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 
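Earlier in this trace, the `no_shrink_alloc` prologue (`get_test_nr_hugepages 2097152 0`) converts a requested size into a hugepage count and pins it to node 0, yielding `nr_hugepages=1024` and `nodes_test[0]=1024`. A simplified stand-in for that bookkeeping, assuming the size is in kB and 2048 kB pages (matching the `Hugepagesize: 2048 kB` line), with names mirroring the trace rather than the SPDK original:

```shell
#!/usr/bin/env bash
declare -A nodes_test=()   # per-node expected hugepage counts

# Sketch: get_test_nr_hugepages <size-kB> [node-id...]
get_test_nr_hugepages() {
    local size=$1; shift
    local user_nodes=("$@")
    local default_hugepages=2048   # kB, per "Hugepagesize: 2048 kB"
    (( size >= default_hugepages )) || return 1
    # 2097152 kB / 2048 kB per page = 1024 pages, as in the trace
    local nr_hugepages=$(( size / default_hugepages ))
    local node
    for node in "${user_nodes[@]}"; do
        nodes_test[$node]=$nr_hugepages
    done
}
```

The later `node0=512 expecting 512` / `node1=1024 expecting 1024` lines are the verification half: the per-node counts read back from sysfs are compared against the `nodes_test` entries populated here.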
02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.537 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.537 02:52:33 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 
00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 
-- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 
00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 
-- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.538 02:52:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.538 02:52:33 -- setup/common.sh@33 -- # echo 0 00:04:38.538 02:52:33 -- setup/common.sh@33 -- # return 0 00:04:38.538 02:52:33 -- setup/hugepages.sh@99 -- # surp=0 00:04:38.538 02:52:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:38.538 02:52:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:38.538 02:52:33 -- setup/common.sh@18 -- # local node= 00:04:38.538 02:52:33 -- setup/common.sh@19 -- # local var val 00:04:38.538 02:52:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.538 02:52:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.538 02:52:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.538 02:52:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.538 02:52:33 -- setup/common.sh@28 
-- # mapfile -t mem 00:04:38.538 02:52:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.538 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.539 02:52:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43817984 kB' 'MemAvailable: 47328300 kB' 'Buffers: 3736 kB' 'Cached: 12226892 kB' 'SwapCached: 0 kB' 'Active: 9241944 kB' 'Inactive: 3507584 kB' 'Active(anon): 8847592 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522096 kB' 'Mapped: 215876 kB' 'Shmem: 8328692 kB' 'KReclaimable: 204008 kB' 'Slab: 580968 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 376960 kB' 'KernelStack: 12832 kB' 'PageTables: 8068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9963928 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:38.539 02:52:33 -- setup/common.sh@31-32 -- # [... repetitive xtrace elided: each meminfo key from MemTotal onward tested with [[ key == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] and skipped via continue until HugePages_Rsvd matches ...] 00:04:38.540 02:52:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:38.540 02:52:33 -- setup/common.sh@33 -- # echo 0 00:04:38.540 02:52:33 -- setup/common.sh@33 -- # return 0 00:04:38.540 02:52:33 -- setup/hugepages.sh@100 -- # resv=0 00:04:38.540 02:52:33 -- setup/hugepages.sh@102 -- #
echo nr_hugepages=1024 00:04:38.540 nr_hugepages=1024 00:04:38.540 02:52:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:38.540 resv_hugepages=0 00:04:38.540 02:52:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:38.540 surplus_hugepages=0 00:04:38.540 02:52:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:38.540 anon_hugepages=0 00:04:38.540 02:52:33 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.540 02:52:33 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:38.540 02:52:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:38.540 02:52:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:38.540 02:52:33 -- setup/common.sh@18 -- # local node= 00:04:38.540 02:52:33 -- setup/common.sh@19 -- # local var val 00:04:38.540 02:52:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.540 02:52:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.540 02:52:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.540 02:52:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.540 02:52:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.540 02:52:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.540 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.540 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.540 02:52:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43817984 kB' 'MemAvailable: 47328300 kB' 'Buffers: 3736 kB' 'Cached: 12226908 kB' 'SwapCached: 0 kB' 'Active: 9241988 kB' 'Inactive: 3507584 kB' 'Active(anon): 8847636 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522108 kB' 'Mapped: 215876 kB' 'Shmem: 8328708 kB' 'KReclaimable: 204008 kB' 'Slab: 580968 kB' 
'SReclaimable: 204008 kB' 'SUnreclaim: 376960 kB' 'KernelStack: 12800 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9963940 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:38.540 02:52:33 -- setup/common.sh@31-32 -- # [... repetitive xtrace elided: meminfo keys from MemTotal through Slab tested with [[ key == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] and skipped via continue ...] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': 
' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 
02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 
02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 
02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:38.541 02:52:33 -- setup/common.sh@33 -- # echo 1024 00:04:38.541 02:52:33 -- setup/common.sh@33 -- # return 0 00:04:38.541 02:52:33 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:38.541 02:52:33 -- setup/hugepages.sh@112 -- # get_nodes 00:04:38.541 02:52:33 -- setup/hugepages.sh@27 -- # local node 00:04:38.541 02:52:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.541 02:52:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:38.541 02:52:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:38.541 02:52:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:38.541 02:52:33 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:38.541 02:52:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:38.541 02:52:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:38.541 02:52:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.541 02:52:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:38.541 02:52:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.541 02:52:33 -- setup/common.sh@18 -- # local node=0 00:04:38.541 02:52:33 -- setup/common.sh@19 -- # local var val 00:04:38.541 02:52:33 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.541 02:52:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.541 02:52:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:38.541 02:52:33 -- 
setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:38.541 02:52:33 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.541 02:52:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26811088 kB' 'MemUsed: 6018796 kB' 'SwapCached: 0 kB' 'Active: 2633236 kB' 'Inactive: 110796 kB' 'Active(anon): 2522348 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2501336 kB' 'Mapped: 41640 kB' 'AnonPages: 245788 kB' 'Shmem: 2279652 kB' 'KernelStack: 7560 kB' 'PageTables: 4676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92820 kB' 'Slab: 323964 kB' 'SReclaimable: 92820 kB' 'SUnreclaim: 231144 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.541 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.541 02:52:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 
-- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # continue 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.542 02:52:33 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.542 02:52:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.542 02:52:33 -- setup/common.sh@33 -- # echo 0 00:04:38.542 02:52:33 -- setup/common.sh@33 -- # return 0 00:04:38.542 02:52:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.542 02:52:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.542 02:52:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.542 02:52:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.542 02:52:33 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:38.542 
node0=1024 expecting 1024 00:04:38.542 02:52:33 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:38.542 02:52:33 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:38.542 02:52:33 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:38.542 02:52:33 -- setup/hugepages.sh@202 -- # setup output 00:04:38.542 02:52:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.542 02:52:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:39.923 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:39.923 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:39.923 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:39.923 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:39.923 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:39.923 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:39.923 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:39.923 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:39.923 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:39.923 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:39.923 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:39.923 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:39.923 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:39.923 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:39.923 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:39.923 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:39.923 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:39.923 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:39.923 02:52:34 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:39.923 02:52:34 -- setup/hugepages.sh@89 -- # local node 00:04:39.923 02:52:34 -- setup/hugepages.sh@90 -- # local 
sorted_t 00:04:39.923 02:52:34 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:39.923 02:52:34 -- setup/hugepages.sh@92 -- # local surp 00:04:39.923 02:52:34 -- setup/hugepages.sh@93 -- # local resv 00:04:39.923 02:52:34 -- setup/hugepages.sh@94 -- # local anon 00:04:39.923 02:52:34 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:39.923 02:52:34 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:39.923 02:52:34 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:39.923 02:52:34 -- setup/common.sh@18 -- # local node= 00:04:39.923 02:52:34 -- setup/common.sh@19 -- # local var val 00:04:39.923 02:52:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:39.923 02:52:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.923 02:52:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.923 02:52:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.923 02:52:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.923 02:52:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.923 02:52:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.923 02:52:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.924 02:52:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43777604 kB' 'MemAvailable: 47287920 kB' 'Buffers: 3736 kB' 'Cached: 12226960 kB' 'SwapCached: 0 kB' 'Active: 9247616 kB' 'Inactive: 3507584 kB' 'Active(anon): 8853264 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527760 kB' 'Mapped: 216396 kB' 'Shmem: 8328760 kB' 'KReclaimable: 204008 kB' 'Slab: 581028 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 377020 kB' 'KernelStack: 12864 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 
37610880 kB' 'Committed_AS: 9970232 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196696 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:39.924 02:52:34 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.924 02:52:34 -- setup/common.sh@32 -- # continue 00:04:39.924 02:52:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.924 02:52:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.924 02:52:34 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.924 02:52:34 -- setup/common.sh@32 -- # continue 00:04:39.924 02:52:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.924 02:52:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.924 02:52:34 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.924 02:52:34 -- setup/common.sh@32 -- # continue 00:04:39.924 02:52:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.924 02:52:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.924 02:52:34 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.924 02:52:34 -- setup/common.sh@32 -- # continue 00:04:39.924 02:52:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.924 02:52:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.924 02:52:34 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.924 02:52:34 -- setup/common.sh@32 -- # continue 00:04:39.924 02:52:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.924 02:52:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.924 02:52:34 -- setup/common.sh@32 -- # [[ SwapCached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.924 02:52:34 -- setup/common.sh@32 -- # continue 00:04:39.924 02:52:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.924 02:52:34 -- setup/common.sh@31 -- # read -r var val _ [... identical IFS/read/pattern-check/continue trace repeated for each remaining /proc/meminfo field ...] 00:04:39.925 02:52:34 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:39.925 02:52:34 -- setup/common.sh@33 -- # echo 0 00:04:39.925 02:52:34 -- setup/common.sh@33 -- # return 0 00:04:39.925 02:52:34 -- setup/hugepages.sh@97 -- # anon=0 
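The trace above is a helper scanning /proc/meminfo key by key until the requested field (here AnonHugePages) matches, then echoing its value. A minimal sketch of that idea, with a hypothetical function name rather than SPDK's actual setup/common.sh implementation:

```shell
#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup (hypothetical; not the real
# setup/common.sh): split each /proc/meminfo line on ':' and spaces,
# print the value of the requested key, fall back to 0 if absent.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"        # value in kB (or a bare count)
            return 0
        fi
    done < /proc/meminfo
    echo 0                     # key not present
}

get_meminfo MemTotal
```

Each traced `IFS=': '` / `read -r var val _` / `continue` triplet in the log corresponds to one non-matching iteration of this loop.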
00:04:39.925 02:52:34 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:39.925 02:52:34 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.925 02:52:34 -- setup/common.sh@18 -- # local node= 00:04:39.925 02:52:34 -- setup/common.sh@19 -- # local var val 00:04:39.925 02:52:34 -- setup/common.sh@20 -- # local mem_f mem 00:04:39.925 02:52:34 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.925 02:52:34 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.925 02:52:34 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.925 02:52:34 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.925 02:52:34 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.925 02:52:34 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.925 02:52:34 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.925 02:52:34 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43782748 kB' 'MemAvailable: 47293064 kB' 'Buffers: 3736 kB' 'Cached: 12226960 kB' 'SwapCached: 0 kB' 'Active: 9247688 kB' 'Inactive: 3507584 kB' 'Active(anon): 8853336 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527804 kB' 'Mapped: 216540 kB' 'Shmem: 8328760 kB' 'KReclaimable: 204008 kB' 'Slab: 581076 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 377068 kB' 'KernelStack: 12784 kB' 'PageTables: 7908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9970244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196648 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:39.925 02:52:34 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.925 02:52:34 -- setup/common.sh@32 -- # continue [... identical IFS/read/pattern-check/continue trace repeated for each remaining /proc/meminfo field ...] 00:04:39.926 02:52:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.926 02:52:35 -- setup/common.sh@33 -- # echo 0 00:04:39.926 02:52:35 -- setup/common.sh@33 -- # return 0 00:04:39.926 02:52:35 -- setup/hugepages.sh@99 -- # surp=0 00:04:39.926 02:52:35 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:39.926 02:52:35 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:39.926 02:52:35 -- setup/common.sh@18 -- # local node= 00:04:39.926 02:52:35 -- setup/common.sh@19 -- # local var val 00:04:39.926 02:52:35 -- setup/common.sh@20 -- # local mem_f mem 00:04:39.926 02:52:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.926 02:52:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.926 02:52:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.926 02:52:35 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.926 02:52:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.926 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.926 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.926 02:52:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43781740 kB' 'MemAvailable: 47292056 kB' 'Buffers: 3736 kB' 'Cached: 12226972 kB' 'SwapCached: 0 kB' 'Active: 9242284 kB' 'Inactive: 3507584 kB' 'Active(anon): 8847932 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522328 kB' 'Mapped: 215956 kB' 'Shmem: 8328772 kB' 'KReclaimable: 204008 kB' 'Slab: 581076 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 377068 kB' 'KernelStack: 12816 kB' 'PageTables: 7932 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9964140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:39.926 02:52:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.926 02:52:35 -- setup/common.sh@32 -- # continue [... identical IFS/read/pattern-check/continue trace repeated for each remaining /proc/meminfo field ...] 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:39.927 02:52:35 -- setup/common.sh@33 -- # echo 0 00:04:39.927 02:52:35 -- setup/common.sh@33 -- # return 0 00:04:39.927 02:52:35 -- setup/hugepages.sh@100 -- # resv=0 00:04:39.927 02:52:35 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:39.927 nr_hugepages=1024 00:04:39.927 02:52:35 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:39.927 resv_hugepages=0 00:04:39.927 02:52:35 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:39.927 surplus_hugepages=0 00:04:39.927 02:52:35 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:39.927 anon_hugepages=0 00:04:39.927 02:52:35 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.927 02:52:35 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:39.927 02:52:35 -- setup/hugepages.sh@110 -- # 
get_meminfo HugePages_Total 00:04:39.927 02:52:35 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:39.927 02:52:35 -- setup/common.sh@18 -- # local node= 00:04:39.927 02:52:35 -- setup/common.sh@19 -- # local var val 00:04:39.927 02:52:35 -- setup/common.sh@20 -- # local mem_f mem 00:04:39.927 02:52:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.927 02:52:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:39.927 02:52:35 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:39.927 02:52:35 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.927 02:52:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.927 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.927 02:52:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43781000 kB' 'MemAvailable: 47291316 kB' 'Buffers: 3736 kB' 'Cached: 12226988 kB' 'SwapCached: 0 kB' 'Active: 9242156 kB' 'Inactive: 3507584 kB' 'Active(anon): 8847804 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3507584 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522216 kB' 'Mapped: 215880 kB' 'Shmem: 8328788 kB' 'KReclaimable: 204008 kB' 'Slab: 581076 kB' 'SReclaimable: 204008 kB' 'SUnreclaim: 377068 kB' 'KernelStack: 12848 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9964152 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 37632 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1916508 kB' 'DirectMap2M: 15828992 kB' 'DirectMap1G: 51380224 kB' 00:04:39.927 02:52:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 
00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.928 02:52:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.928 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.928 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.929 02:52:35 -- setup/common.sh@33 -- # echo 1024 00:04:39.929 02:52:35 -- setup/common.sh@33 -- # return 0 00:04:39.929 02:52:35 -- 
setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:39.929 02:52:35 -- setup/hugepages.sh@112 -- # get_nodes 00:04:39.929 02:52:35 -- setup/hugepages.sh@27 -- # local node 00:04:39.929 02:52:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.929 02:52:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:39.929 02:52:35 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:39.929 02:52:35 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:39.929 02:52:35 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:39.929 02:52:35 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:39.929 02:52:35 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:39.929 02:52:35 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:39.929 02:52:35 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:39.929 02:52:35 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:39.929 02:52:35 -- setup/common.sh@18 -- # local node=0 00:04:39.929 02:52:35 -- setup/common.sh@19 -- # local var val 00:04:39.929 02:52:35 -- setup/common.sh@20 -- # local mem_f mem 00:04:39.929 02:52:35 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:39.929 02:52:35 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:39.929 02:52:35 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:39.929 02:52:35 -- setup/common.sh@28 -- # mapfile -t mem 00:04:39.929 02:52:35 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26809876 kB' 'MemUsed: 6020008 kB' 'SwapCached: 0 kB' 'Active: 2633108 kB' 'Inactive: 110796 kB' 'Active(anon): 2522220 kB' 'Inactive(anon): 0 kB' 
'Active(file): 110888 kB' 'Inactive(file): 110796 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2501336 kB' 'Mapped: 41204 kB' 'AnonPages: 245640 kB' 'Shmem: 2279652 kB' 'KernelStack: 7608 kB' 'PageTables: 4676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 92820 kB' 'Slab: 324124 kB' 'SReclaimable: 92820 kB' 'SUnreclaim: 231304 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 
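The `HugePages_Surp` scan above runs against the per-node file `/sys/devices/system/node/node0/meminfo` rather than `/proc/meminfo`; those per-node lines carry a `Node 0 ` prefix that setup/common.sh strips with the extglob expansion `"${mem[@]#Node +([0-9]) }"` before the key/value scan. A hedged sketch of that prefix stripping (the helper name `strip_node_prefix` and the inline sample are hypothetical):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the per-node prefix stripping used by get_meminfo:
# each /sys/devices/system/node/node<N>/meminfo line begins with "Node <N> ",
# which must be removed so the remaining "Key: value" text parses normally.
shopt -s extglob   # enables the +([0-9]) pattern below, as in setup/common.sh

strip_node_prefix() {
  local line
  while IFS= read -r line; do
    printf '%s\n' "${line#Node +([0-9]) }"
  done
}

# Inline sample standing in for the sysfs file so the sketch is self-contained.
sample='Node 0 HugePages_Total: 1024
Node 0 HugePages_Free: 1024
Node 0 HugePages_Surp: 0'

strip_node_prefix <<<"$sample"
```

After stripping, the same field scan as before yields `HugePages_Surp: 0` for node 0, matching the `echo 0` / `return 0` records that close the scan in the log.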
00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # 
[[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.929 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.929 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # continue 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # IFS=': ' 00:04:39.930 02:52:35 -- setup/common.sh@31 -- # read -r var val _ 00:04:39.930 02:52:35 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:39.930 02:52:35 -- setup/common.sh@33 -- # echo 0 00:04:39.930 02:52:35 -- setup/common.sh@33 -- # return 0 00:04:39.930 02:52:35 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:39.930 02:52:35 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:39.930 02:52:35 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:39.930 02:52:35 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:39.930 02:52:35 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:39.930 node0=1024 expecting 1024 00:04:39.930 02:52:35 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:39.930 00:04:39.930 real 0m2.702s 00:04:39.930 user 0m1.126s 00:04:39.930 sys 0m1.503s 00:04:39.930 02:52:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.930 02:52:35 -- common/autotest_common.sh@10 -- # set +x 00:04:39.930 ************************************ 00:04:39.930 END TEST no_shrink_alloc 00:04:39.930 ************************************ 00:04:39.930 02:52:35 -- setup/hugepages.sh@217 -- # clear_hp 00:04:39.930 02:52:35 -- 
setup/hugepages.sh@37 -- # local node hp 00:04:39.930 02:52:35 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:39.930 02:52:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:39.930 02:52:35 -- setup/hugepages.sh@41 -- # echo 0 00:04:39.930 02:52:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:39.930 02:52:35 -- setup/hugepages.sh@41 -- # echo 0 00:04:39.930 02:52:35 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:39.930 02:52:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:39.930 02:52:35 -- setup/hugepages.sh@41 -- # echo 0 00:04:39.930 02:52:35 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:39.930 02:52:35 -- setup/hugepages.sh@41 -- # echo 0 00:04:39.930 02:52:35 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:39.930 02:52:35 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:39.930 00:04:39.930 real 0m11.345s 00:04:39.930 user 0m4.390s 00:04:39.930 sys 0m5.847s 00:04:39.930 02:52:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.930 02:52:35 -- common/autotest_common.sh@10 -- # set +x 00:04:39.930 ************************************ 00:04:39.930 END TEST hugepages 00:04:39.930 ************************************ 00:04:39.930 02:52:35 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:39.930 02:52:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.930 02:52:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.930 02:52:35 -- common/autotest_common.sh@10 -- # set +x 00:04:39.930 ************************************ 00:04:39.930 START TEST driver 00:04:39.930 ************************************ 00:04:39.930 02:52:35 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:40.187 * Looking for test storage... 00:04:40.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:40.187 02:52:35 -- setup/driver.sh@68 -- # setup reset 00:04:40.187 02:52:35 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:40.187 02:52:35 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:42.719 02:52:37 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:42.719 02:52:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.719 02:52:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.719 02:52:37 -- common/autotest_common.sh@10 -- # set +x 00:04:42.719 ************************************ 00:04:42.719 START TEST guess_driver 00:04:42.719 ************************************ 00:04:42.719 02:52:37 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:42.719 02:52:37 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:42.719 02:52:37 -- setup/driver.sh@47 -- # local fail=0 00:04:42.719 02:52:37 -- setup/driver.sh@49 -- # pick_driver 00:04:42.719 02:52:37 -- setup/driver.sh@36 -- # vfio 00:04:42.719 02:52:37 -- setup/driver.sh@21 -- # local iommu_grups 00:04:42.719 02:52:37 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:42.719 02:52:37 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:42.719 02:52:37 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:42.719 02:52:37 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:42.719 02:52:37 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:42.719 02:52:37 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:42.719 02:52:37 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:42.719 02:52:37 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:42.719 02:52:37 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:42.719 02:52:37 
-- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:42.719 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:42.719 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:42.719 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:42.719 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:42.719 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:42.719 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:42.719 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:42.719 02:52:37 -- setup/driver.sh@30 -- # return 0 00:04:42.719 02:52:37 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:42.719 02:52:37 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:42.719 02:52:37 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:42.719 02:52:37 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:42.719 Looking for driver=vfio-pci 00:04:42.719 02:52:37 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:42.719 02:52:37 -- setup/driver.sh@45 -- # setup output config 00:04:42.719 02:52:37 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.719 02:52:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:43.654 02:52:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.654 02:52:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.654 02:52:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.654 02:52:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.654 02:52:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.654 02:52:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 
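The trace above shows setup/driver.sh settling on vfio-pci: it reads the vfio `enable_unsafe_noiommu_mode` parameter, counts `/sys/kernel/iommu_groups/*` entries (141 here), and confirms `modprobe --show-depends vfio_pci` resolves to real `.ko` objects. A minimal standalone sketch of that decision, with the probe results passed in as parameters so it runs anywhere; the helper name, inputs, and the uio_pci_generic fallback label are illustrative, not lifted from the script:

```shell
#!/usr/bin/env bash
# Sketch of the pick_driver logic seen in the trace: prefer vfio-pci when
# IOMMU groups exist (or unsafe no-IOMMU mode is enabled) and the vfio_pci
# module dependency chain resolves to real kernel objects.
pick_driver() {
    local iommu_groups=$1   # e.g. count of /sys/kernel/iommu_groups/* entries
    local unsafe_vfio=$2    # contents of enable_unsafe_noiommu_mode (Y or N)
    local modprobe_out=$3   # output of: modprobe --show-depends vfio_pci

    if { (( iommu_groups > 0 )) || [[ $unsafe_vfio == Y ]]; } \
        && [[ $modprobe_out == *.ko* ]]; then
        echo vfio-pci
    else
        echo uio_pci_generic   # illustrative fallback when vfio is unusable
    fi
}

pick_driver 141 N 'insmod /lib/modules/x/vfio-pci.ko.xz'   # prints vfio-pci
```

With 141 IOMMU groups and a resolvable module chain, this reproduces the `driver=vfio-pci` choice the log reaches at setup/driver.sh@49.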
00:04:43.654 02:52:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.654 02:52:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.654 02:52:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.654 02:52:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.654 02:52:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.654 02:52:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.654 02:52:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.654 02:52:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.654 02:52:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.654 02:52:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.654 02:52:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.654 02:52:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.654 02:52:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.654 02:52:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.654 02:52:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.654 02:52:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.654 02:52:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.654 02:52:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.912 02:52:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.912 02:52:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.912 02:52:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.912 02:52:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.912 02:52:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.912 02:52:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.912 02:52:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.912 02:52:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.912 02:52:38 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.912 02:52:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.912 02:52:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.912 02:52:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.912 02:52:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.912 02:52:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.912 02:52:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.912 02:52:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.912 02:52:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.912 02:52:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.912 02:52:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.912 02:52:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.912 02:52:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:43.912 02:52:38 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:43.912 02:52:38 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:43.912 02:52:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.851 02:52:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:44.851 02:52:39 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:44.851 02:52:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:44.851 02:52:39 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:44.851 02:52:39 -- setup/driver.sh@65 -- # setup reset 00:04:44.851 02:52:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:44.851 02:52:39 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.387 00:04:47.387 real 0m4.953s 00:04:47.387 user 0m1.111s 00:04:47.387 sys 0m1.987s 00:04:47.387 02:52:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.387 02:52:42 -- common/autotest_common.sh@10 -- # set +x 00:04:47.387 
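The long `HugePages_Surp` scan earlier in this section is setup/common.sh's field lookup: it splits each meminfo-style line on `': '` via `read -r var val _`, `continue`s past every non-matching field, and echoes 0 when the requested field yields nothing. A condensed, self-contained sketch of that loop, fed a fixed here-doc instead of the live `/proc/meminfo` so the behavior is reproducible:

```shell
#!/usr/bin/env bash
# Sketch of the meminfo walk in the trace: split "Field: value kB" lines
# on ': ', skip until the requested field matches, echo its value
# (0 if the field is absent), as done for HugePages_Surp.
get_meminfo() {
    local want=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$want" ]] && { echo "$val"; return 0; }
    done
    echo 0   # field not present: treated as zero, matching "echo 0" in the log
}

get_meminfo HugePages_Surp <<'EOF'
MemTotal: 131327988 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Surp: 0
EOF
```

Run against this sample input it prints `0`, which is the value the test feeds into `(( nodes_test[node] += 0 ))` before concluding `node0=1024 expecting 1024`.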
************************************ 00:04:47.387 END TEST guess_driver 00:04:47.387 ************************************ 00:04:47.387 00:04:47.387 real 0m7.431s 00:04:47.387 user 0m1.646s 00:04:47.387 sys 0m2.946s 00:04:47.387 02:52:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.387 02:52:42 -- common/autotest_common.sh@10 -- # set +x 00:04:47.387 ************************************ 00:04:47.387 END TEST driver 00:04:47.387 ************************************ 00:04:47.387 02:52:42 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:47.387 02:52:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:47.387 02:52:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:47.387 02:52:42 -- common/autotest_common.sh@10 -- # set +x 00:04:47.387 ************************************ 00:04:47.387 START TEST devices 00:04:47.387 ************************************ 00:04:47.387 02:52:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:47.645 * Looking for test storage... 
00:04:47.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:47.645 02:52:42 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:47.645 02:52:42 -- setup/devices.sh@192 -- # setup reset 00:04:47.645 02:52:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:47.646 02:52:42 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:49.022 02:52:44 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:49.022 02:52:44 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:49.022 02:52:44 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:49.022 02:52:44 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:49.022 02:52:44 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:49.022 02:52:44 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:49.022 02:52:44 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:49.022 02:52:44 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:49.022 02:52:44 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:49.022 02:52:44 -- setup/devices.sh@196 -- # blocks=() 00:04:49.022 02:52:44 -- setup/devices.sh@196 -- # declare -a blocks 00:04:49.022 02:52:44 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:49.022 02:52:44 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:49.022 02:52:44 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:49.022 02:52:44 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:49.022 02:52:44 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:49.022 02:52:44 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:49.022 02:52:44 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:49.022 02:52:44 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:49.022 02:52:44 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:49.022 02:52:44 -- scripts/common.sh@380 
-- # local block=nvme0n1 pt 00:04:49.022 02:52:44 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:49.022 No valid GPT data, bailing 00:04:49.022 02:52:44 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:49.022 02:52:44 -- scripts/common.sh@393 -- # pt= 00:04:49.022 02:52:44 -- scripts/common.sh@394 -- # return 1 00:04:49.022 02:52:44 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:49.022 02:52:44 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:49.022 02:52:44 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:49.022 02:52:44 -- setup/common.sh@80 -- # echo 1000204886016 00:04:49.022 02:52:44 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:49.022 02:52:44 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:49.022 02:52:44 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:49.022 02:52:44 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:49.022 02:52:44 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:49.022 02:52:44 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:49.022 02:52:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:49.022 02:52:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:49.022 02:52:44 -- common/autotest_common.sh@10 -- # set +x 00:04:49.022 ************************************ 00:04:49.022 START TEST nvme_mount 00:04:49.022 ************************************ 00:04:49.022 02:52:44 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:49.022 02:52:44 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:49.022 02:52:44 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:49.022 02:52:44 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.022 02:52:44 -- setup/devices.sh@98 -- # 
nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.022 02:52:44 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:49.022 02:52:44 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:49.022 02:52:44 -- setup/common.sh@40 -- # local part_no=1 00:04:49.022 02:52:44 -- setup/common.sh@41 -- # local size=1073741824 00:04:49.022 02:52:44 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:49.022 02:52:44 -- setup/common.sh@44 -- # parts=() 00:04:49.022 02:52:44 -- setup/common.sh@44 -- # local parts 00:04:49.022 02:52:44 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:49.022 02:52:44 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.022 02:52:44 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:49.022 02:52:44 -- setup/common.sh@46 -- # (( part++ )) 00:04:49.022 02:52:44 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:49.022 02:52:44 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:49.022 02:52:44 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:49.022 02:52:44 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:49.962 Creating new GPT entries in memory. 00:04:49.962 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:49.962 other utilities. 00:04:49.962 02:52:45 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:49.962 02:52:45 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:49.962 02:52:45 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:49.962 02:52:45 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:49.962 02:52:45 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:51.341 Creating new GPT entries in memory. 00:04:51.341 The operation has completed successfully. 
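The partition_drive trace above converts the 1 GiB byte size to 512-byte sectors (`(( size /= 512 ))`) and then issues `sgdisk /dev/nvme0n1 --new=1:2048:2099199`: the first partition starts at sector 2048 and each partition ends at its start plus the sector count minus one. A small sketch of just that arithmetic (pure math, no disk touched; the function name is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the sector arithmetic behind "sgdisk --new=1:2048:2099199":
# byte size -> 512-byte sectors, first partition starting at sector 2048,
# subsequent partitions packed immediately after the previous one.
part_bounds() {
    local size=$1 part_no=$2
    local part part_start=0 part_end=0
    (( size /= 512 ))   # bytes -> 512-byte sectors (1073741824 -> 2097152)
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        echo "--new=$part:$part_start:$part_end"
    done
}

part_bounds 1073741824 1   # prints --new=1:2048:2099199
```

2048 + 2097152 - 1 = 2099199, matching the `--new=1:2048:2099199` argument recorded in the log.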
00:04:51.341 02:52:46 -- setup/common.sh@57 -- # (( part++ )) 00:04:51.341 02:52:46 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:51.341 02:52:46 -- setup/common.sh@62 -- # wait 1868757 00:04:51.341 02:52:46 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.341 02:52:46 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:51.341 02:52:46 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.341 02:52:46 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:51.341 02:52:46 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:51.341 02:52:46 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.341 02:52:46 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.341 02:52:46 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:51.341 02:52:46 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:51.341 02:52:46 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:51.341 02:52:46 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:51.341 02:52:46 -- setup/devices.sh@53 -- # local found=0 00:04:51.341 02:52:46 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:51.341 02:52:46 -- setup/devices.sh@56 -- # : 00:04:51.341 02:52:46 -- setup/devices.sh@59 -- # local pci status 00:04:51.341 02:52:46 -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:51.341 02:52:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:51.341 02:52:46 -- setup/devices.sh@47 -- # setup output config 00:04:51.341 02:52:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:51.341 02:52:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:52.279 02:52:47 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:52.279 02:52:47 -- setup/devices.sh@63 -- # found=1 00:04:52.279 02:52:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.279 02:52:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.279 02:52:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.279 02:52:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.279 02:52:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.279 02:52:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.279 02:52:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.279 02:52:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.279 02:52:47 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.279 02:52:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.279 02:52:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.279 02:52:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.279 02:52:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.279 02:52:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.279 02:52:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.279 02:52:47 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.279 02:52:47 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.279 02:52:47 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.279 02:52:47 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:52.279 02:52:47 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.279 02:52:47 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.279 
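The verify loop above reads `pci _ _ status` records from the setup.sh status output and sets `found=1` only when the allowed device's status line mentions the expected mount mapping; every other BDF (0000:00:04.x, 0000:80:04.x) falls through. A rough standalone sketch of that matching, with made-up status lines since the real ones come from `setup.sh config`:

```shell
#!/usr/bin/env bash
# Sketch of the verify loop in the trace: scan "BDF vendor device status"
# lines and succeed only if the allowed device's status contains the
# expected device:partition mapping (sample input lines are invented).
verify_dev() {
    local dev=$1 mounts=$2 pci _ status found=0
    while read -r pci _ _ status; do
        [[ $pci == "$dev" && $status == *"$mounts"* ]] && found=1
    done
    (( found == 1 ))   # exit status mirrors the (( found == 1 )) check
}

verify_dev 0000:88:00.0 nvme0n1:nvme0n1p1 <<'EOF'
0000:88:00.0 8086 0a54 Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev
0000:00:04.0 8086 2021 idle
EOF
echo $?   # prints 0 (device found)
```

Only the 0000:88:00.0 record both matches the allowed BDF and carries the `nvme0n1:nvme0n1p1` mapping, so the check exits 0, as the `(( found == 1 ))` step does in the log.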
02:52:47 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.279 02:52:47 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:52.279 02:52:47 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.279 02:52:47 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.279 02:52:47 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:52.279 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.279 02:52:47 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.279 02:52:47 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:52.537 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:52.538 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:52.538 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:52.538 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:52.538 02:52:47 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:52.538 02:52:47 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:52.538 02:52:47 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.538 02:52:47 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:52.538 02:52:47 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:52.538 02:52:47 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.538 02:52:47 -- setup/devices.sh@116 -- # verify 0000:88:00.0 
nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.538 02:52:47 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:52.538 02:52:47 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:52.538 02:52:47 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.538 02:52:47 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:52.538 02:52:47 -- setup/devices.sh@53 -- # local found=0 00:04:52.538 02:52:47 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:52.538 02:52:47 -- setup/devices.sh@56 -- # : 00:04:52.538 02:52:47 -- setup/devices.sh@59 -- # local pci status 00:04:52.538 02:52:47 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.538 02:52:47 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:52.538 02:52:47 -- setup/devices.sh@47 -- # setup output config 00:04:52.538 02:52:47 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.538 02:52:47 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:53.503 02:52:48 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.503 02:52:48 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:53.503 02:52:48 -- setup/devices.sh@63 -- # found=1 00:04:53.504 02:52:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.504 02:52:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.504 02:52:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.504 02:52:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.504 02:52:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.504 02:52:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.504 02:52:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.504 02:52:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.504 02:52:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.504 02:52:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.504 02:52:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.504 02:52:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.504 02:52:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.504 02:52:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.504 02:52:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.504 02:52:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.504 02:52:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.504 02:52:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.504 02:52:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.504 02:52:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.504 02:52:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.504 02:52:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.762 02:52:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.762 02:52:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.762 02:52:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.762 02:52:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.762 02:52:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.762 02:52:48 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.762 02:52:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.762 02:52:48 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.762 02:52:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.762 02:52:48 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.762 02:52:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.762 02:52:48 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.762 02:52:48 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:53.762 02:52:48 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.762 02:52:48 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:53.762 02:52:48 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:53.762 02:52:48 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.762 02:52:48 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:53.762 02:52:48 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:53.762 02:52:48 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:53.762 02:52:48 -- setup/devices.sh@50 -- # local mount_point= 00:04:53.762 02:52:48 -- setup/devices.sh@51 -- # local test_file= 00:04:53.762 02:52:48 -- setup/devices.sh@53 -- # local found=0 00:04:53.762 02:52:48 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:53.762 02:52:48 -- setup/devices.sh@59 -- # local pci status 00:04:53.762 02:52:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.762 02:52:48 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:53.762 02:52:48 -- setup/devices.sh@47 -- # setup 
output config 00:04:53.762 02:52:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:53.762 02:52:48 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:55.145 02:52:49 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.145 02:52:49 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:55.145 02:52:49 -- setup/devices.sh@63 -- # found=1 00:04:55.145 02:52:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.145 02:52:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.145 02:52:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.145 02:52:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.145 02:52:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.145 02:52:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.145 02:52:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.145 02:52:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.145 02:52:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.145 02:52:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.145 02:52:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.145 02:52:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.145 02:52:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.145 02:52:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.145 02:52:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.145 02:52:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.145 02:52:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.145 02:52:49 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.145 02:52:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.145 02:52:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.145 02:52:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.145 02:52:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.145 02:52:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.145 02:52:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.145 02:52:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.145 02:52:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.145 02:52:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.145 02:52:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.145 02:52:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.145 02:52:50 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.145 02:52:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.145 02:52:50 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:55.145 02:52:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.145 02:52:50 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:55.145 02:52:50 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:55.145 02:52:50 -- setup/devices.sh@68 -- # return 0 00:04:55.145 02:52:50 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:55.145 02:52:50 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:55.145 02:52:50 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:55.145 02:52:50 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:55.145 02:52:50 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:55.145 /dev/nvme0n1: 2 bytes were erased at 
offset 0x00000438 (ext4): 53 ef 00:04:55.145 00:04:55.145 real 0m6.072s 00:04:55.145 user 0m1.424s 00:04:55.145 sys 0m2.230s 00:04:55.145 02:52:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.145 02:52:50 -- common/autotest_common.sh@10 -- # set +x 00:04:55.145 ************************************ 00:04:55.145 END TEST nvme_mount 00:04:55.145 ************************************ 00:04:55.145 02:52:50 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:55.145 02:52:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:55.145 02:52:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.145 02:52:50 -- common/autotest_common.sh@10 -- # set +x 00:04:55.145 ************************************ 00:04:55.145 START TEST dm_mount 00:04:55.145 ************************************ 00:04:55.145 02:52:50 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:55.145 02:52:50 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:55.145 02:52:50 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:55.145 02:52:50 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:55.145 02:52:50 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:55.145 02:52:50 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:55.145 02:52:50 -- setup/common.sh@40 -- # local part_no=2 00:04:55.145 02:52:50 -- setup/common.sh@41 -- # local size=1073741824 00:04:55.146 02:52:50 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:55.146 02:52:50 -- setup/common.sh@44 -- # parts=() 00:04:55.146 02:52:50 -- setup/common.sh@44 -- # local parts 00:04:55.146 02:52:50 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:55.146 02:52:50 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.146 02:52:50 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:55.146 02:52:50 -- setup/common.sh@46 -- # (( part++ )) 00:04:55.146 02:52:50 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.146 02:52:50 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 
00:04:55.146 02:52:50 -- setup/common.sh@46 -- # (( part++ )) 00:04:55.146 02:52:50 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:55.146 02:52:50 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:55.146 02:52:50 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:55.146 02:52:50 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:56.083 Creating new GPT entries in memory. 00:04:56.083 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:56.083 other utilities. 00:04:56.083 02:52:51 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:56.083 02:52:51 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:56.083 02:52:51 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:56.083 02:52:51 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:56.083 02:52:51 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:57.023 Creating new GPT entries in memory. 00:04:57.023 The operation has completed successfully. 00:04:57.023 02:52:52 -- setup/common.sh@57 -- # (( part++ )) 00:04:57.023 02:52:52 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.023 02:52:52 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:57.023 02:52:52 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:57.023 02:52:52 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:58.403 The operation has completed successfully. 
00:04:58.403 02:52:53 -- setup/common.sh@57 -- # (( part++ )) 00:04:58.403 02:52:53 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:58.403 02:52:53 -- setup/common.sh@62 -- # wait 1871214 00:04:58.403 02:52:53 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:58.403 02:52:53 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.403 02:52:53 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.403 02:52:53 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:58.403 02:52:53 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:58.403 02:52:53 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.403 02:52:53 -- setup/devices.sh@161 -- # break 00:04:58.403 02:52:53 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.403 02:52:53 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:58.403 02:52:53 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:58.403 02:52:53 -- setup/devices.sh@166 -- # dm=dm-0 00:04:58.403 02:52:53 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:58.403 02:52:53 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:58.403 02:52:53 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.403 02:52:53 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:58.403 02:52:53 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.403 02:52:53 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:58.403 02:52:53 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:58.403 02:52:53 -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.403 02:52:53 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.403 02:52:53 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:58.404 02:52:53 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:58.404 02:52:53 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.404 02:52:53 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.404 02:52:53 -- setup/devices.sh@53 -- # local found=0 00:04:58.404 02:52:53 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:58.404 02:52:53 -- setup/devices.sh@56 -- # : 00:04:58.404 02:52:53 -- setup/devices.sh@59 -- # local pci status 00:04:58.404 02:52:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.404 02:52:53 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:58.404 02:52:53 -- setup/devices.sh@47 -- # setup output config 00:04:58.404 02:52:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.404 02:52:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:59.336 02:52:54 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.336 02:52:54 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:59.336 02:52:54 -- setup/devices.sh@63 -- # found=1 00:04:59.336 02:52:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.336 
02:52:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.336 02:52:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.336 02:52:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.336 02:52:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.336 02:52:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.336 02:52:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.336 02:52:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.336 02:52:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.336 02:52:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.336 02:52:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.336 02:52:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.336 02:52:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.336 02:52:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.336 02:52:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.336 02:52:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.336 02:52:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.336 02:52:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.336 02:52:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.336 02:52:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.336 02:52:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.336 02:52:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.336 02:52:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.336 02:52:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.336 02:52:54 -- setup/devices.sh@60 
-- # read -r pci _ _ status 00:04:59.336 02:52:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.336 02:52:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.336 02:52:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.336 02:52:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.336 02:52:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.336 02:52:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.336 02:52:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:59.336 02:52:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.648 02:52:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:59.648 02:52:54 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:59.648 02:52:54 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:59.648 02:52:54 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:59.648 02:52:54 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:59.648 02:52:54 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:59.648 02:52:54 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:59.648 02:52:54 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:59.648 02:52:54 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:59.648 02:52:54 -- setup/devices.sh@50 -- # local mount_point= 00:04:59.648 02:52:54 -- setup/devices.sh@51 -- # local test_file= 00:04:59.648 02:52:54 -- setup/devices.sh@53 -- # local found=0 00:04:59.648 02:52:54 -- setup/devices.sh@55 -- # [[ -n '' ]] 
00:04:59.648 02:52:54 -- setup/devices.sh@59 -- # local pci status 00:04:59.648 02:52:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:59.648 02:52:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:59.648 02:52:54 -- setup/devices.sh@47 -- # setup output config 00:04:59.648 02:52:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:59.648 02:52:54 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:00.584 02:52:55 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.584 02:52:55 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:00.584 02:52:55 -- setup/devices.sh@63 -- # found=1 00:05:00.584 02:52:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.584 02:52:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.584 02:52:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.584 02:52:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.584 02:52:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.584 02:52:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.584 02:52:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.584 02:52:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.584 02:52:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.584 02:52:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.584 02:52:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.584 02:52:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.584 02:52:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:05:00.584 02:52:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.584 02:52:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.584 02:52:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.584 02:52:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.584 02:52:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.584 02:52:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.584 02:52:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.584 02:52:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.584 02:52:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.584 02:52:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.584 02:52:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.584 02:52:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.584 02:52:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.584 02:52:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.584 02:52:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.584 02:52:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.584 02:52:55 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.584 02:52:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.584 02:52:55 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:00.584 02:52:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.841 02:52:55 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.841 02:52:55 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:00.841 02:52:55 -- setup/devices.sh@68 -- # return 0 00:05:00.841 02:52:55 -- setup/devices.sh@187 -- # cleanup_dm 00:05:00.841 02:52:55 -- setup/devices.sh@33 -- # 
mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:00.841 02:52:55 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:00.841 02:52:55 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:00.841 02:52:56 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.841 02:52:56 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:00.841 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:00.841 02:52:56 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:00.841 02:52:56 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:00.841 00:05:00.841 real 0m5.815s 00:05:00.841 user 0m1.024s 00:05:00.841 sys 0m1.645s 00:05:00.841 02:52:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.841 02:52:56 -- common/autotest_common.sh@10 -- # set +x 00:05:00.841 ************************************ 00:05:00.841 END TEST dm_mount 00:05:00.841 ************************************ 00:05:00.841 02:52:56 -- setup/devices.sh@1 -- # cleanup 00:05:00.841 02:52:56 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:00.841 02:52:56 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.841 02:52:56 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:00.841 02:52:56 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:00.841 02:52:56 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:00.841 02:52:56 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:01.098 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:01.098 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:01.098 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:01.098 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:01.098 02:52:56 -- setup/devices.sh@12 -- # cleanup_dm 00:05:01.098 
02:52:56 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:01.098 02:52:56 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.098 02:52:56 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.098 02:52:56 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.098 02:52:56 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.098 02:52:56 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:01.098 00:05:01.098 real 0m13.744s 00:05:01.098 user 0m3.040s 00:05:01.098 sys 0m4.896s 00:05:01.098 02:52:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.098 02:52:56 -- common/autotest_common.sh@10 -- # set +x 00:05:01.098 ************************************ 00:05:01.098 END TEST devices 00:05:01.098 ************************************ 00:05:01.356 00:05:01.356 real 0m42.997s 00:05:01.356 user 0m12.360s 00:05:01.356 sys 0m19.012s 00:05:01.356 02:52:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.356 02:52:56 -- common/autotest_common.sh@10 -- # set +x 00:05:01.356 ************************************ 00:05:01.356 END TEST setup.sh 00:05:01.356 ************************************ 00:05:01.356 02:52:56 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:02.289 Hugepages 00:05:02.289 node hugesize free / total 00:05:02.289 node0 1048576kB 0 / 0 00:05:02.289 node0 2048kB 2048 / 2048 00:05:02.289 node1 1048576kB 0 / 0 00:05:02.289 node1 2048kB 0 / 0 00:05:02.289 00:05:02.290 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:02.290 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:02.290 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:02.290 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:02.290 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:02.290 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:02.290 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:02.290 
I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:02.290 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:02.290 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:02.290 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:02.290 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:02.290 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:02.290 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:02.290 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:02.290 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:02.290 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:02.290 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:02.290 02:52:57 -- spdk/autotest.sh@141 -- # uname -s 00:05:02.290 02:52:57 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:02.290 02:52:57 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:02.290 02:52:57 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:03.223 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:03.223 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:03.223 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:03.223 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:03.223 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:03.482 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:03.482 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:03.482 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:03.482 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:03.482 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:03.482 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:03.482 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:03.482 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:03.482 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:03.482 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:03.482 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:04.417 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:04.417 02:52:59 -- common/autotest_common.sh@1517 
-- # sleep 1 00:05:05.353 02:53:00 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:05.353 02:53:00 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:05.353 02:53:00 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:05.353 02:53:00 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:05.353 02:53:00 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:05.353 02:53:00 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:05.353 02:53:00 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.353 02:53:00 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:05.353 02:53:00 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:05.612 02:53:00 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:05.612 02:53:00 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:05.612 02:53:00 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:06.547 Waiting for block devices as requested 00:05:06.547 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:06.806 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:06.806 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:07.065 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:07.065 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:07.065 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:07.323 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:07.323 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:07.323 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:07.323 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:07.323 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:07.582 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:07.582 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:07.582 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:07.840 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:07.840 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:07.840 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:07.840 02:53:03 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:07.840 02:53:03 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:07.840 02:53:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:07.840 02:53:03 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:05:08.110 02:53:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:08.110 02:53:03 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:08.110 02:53:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:08.110 02:53:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:08.110 02:53:03 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:08.110 02:53:03 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:08.110 02:53:03 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:08.110 02:53:03 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:08.111 02:53:03 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:08.111 02:53:03 -- common/autotest_common.sh@1530 -- # oacs=' 0xf' 00:05:08.111 02:53:03 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:08.111 02:53:03 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:08.111 02:53:03 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:08.111 02:53:03 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:08.111 02:53:03 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:08.111 02:53:03 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:08.111 02:53:03 -- common/autotest_common.sh@1540 -- # [[ 
0 -eq 0 ]] 00:05:08.111 02:53:03 -- common/autotest_common.sh@1542 -- # continue 00:05:08.111 02:53:03 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:08.111 02:53:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:08.111 02:53:03 -- common/autotest_common.sh@10 -- # set +x 00:05:08.111 02:53:03 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:08.111 02:53:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:08.111 02:53:03 -- common/autotest_common.sh@10 -- # set +x 00:05:08.111 02:53:03 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:09.084 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:09.084 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:09.084 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:09.084 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:09.084 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:09.085 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:09.085 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:09.085 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:09.085 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:09.085 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:09.342 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:09.342 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:09.342 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:09.342 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:09.342 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:09.342 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:10.276 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:10.276 02:53:05 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:10.276 02:53:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:10.276 02:53:05 -- common/autotest_common.sh@10 -- # set +x 00:05:10.276 02:53:05 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:10.276 02:53:05 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 
00:05:10.276 02:53:05 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:10.276 02:53:05 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:10.276 02:53:05 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:10.276 02:53:05 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:10.276 02:53:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:10.276 02:53:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:10.276 02:53:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:10.276 02:53:05 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:10.276 02:53:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:10.276 02:53:05 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:10.276 02:53:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:10.276 02:53:05 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:10.277 02:53:05 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:10.277 02:53:05 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:05:10.277 02:53:05 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:10.277 02:53:05 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:05:10.277 02:53:05 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:05:10.277 02:53:05 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:05:10.277 02:53:05 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1876508 00:05:10.277 02:53:05 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:10.277 02:53:05 -- common/autotest_common.sh@1583 -- # waitforlisten 1876508 00:05:10.277 02:53:05 -- common/autotest_common.sh@819 -- # '[' -z 1876508 ']' 00:05:10.277 02:53:05 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.277 02:53:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:10.277 02:53:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.277 02:53:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:10.277 02:53:05 -- common/autotest_common.sh@10 -- # set +x 00:05:10.535 [2024-07-14 02:53:05.573184] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:05:10.535 [2024-07-14 02:53:05.573252] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1876508 ] 00:05:10.535 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.535 [2024-07-14 02:53:05.632747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.535 [2024-07-14 02:53:05.720438] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:10.535 [2024-07-14 02:53:05.720629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.469 02:53:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:11.469 02:53:06 -- common/autotest_common.sh@852 -- # return 0 00:05:11.469 02:53:06 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:11.469 02:53:06 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:11.469 02:53:06 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:14.752 nvme0n1 00:05:14.752 02:53:09 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_nvme_opal_revert -b nvme0 -p test 00:05:14.752 [2024-07-14 02:53:09.766178] nvme_opal.c:2059:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:14.752 [2024-07-14 02:53:09.766226] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:14.752 request: 00:05:14.752 { 00:05:14.752 "nvme_ctrlr_name": "nvme0", 00:05:14.752 "password": "test", 00:05:14.752 "method": "bdev_nvme_opal_revert", 00:05:14.752 "req_id": 1 00:05:14.752 } 00:05:14.752 Got JSON-RPC error response 00:05:14.752 response: 00:05:14.752 { 00:05:14.752 "code": -32603, 00:05:14.752 "message": "Internal error" 00:05:14.752 } 00:05:14.752 02:53:09 -- common/autotest_common.sh@1589 -- # true 00:05:14.752 02:53:09 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:14.752 02:53:09 -- common/autotest_common.sh@1593 -- # killprocess 1876508 00:05:14.752 02:53:09 -- common/autotest_common.sh@926 -- # '[' -z 1876508 ']' 00:05:14.752 02:53:09 -- common/autotest_common.sh@930 -- # kill -0 1876508 00:05:14.752 02:53:09 -- common/autotest_common.sh@931 -- # uname 00:05:14.752 02:53:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:14.752 02:53:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1876508 00:05:14.752 02:53:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:14.752 02:53:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:14.752 02:53:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1876508' 00:05:14.752 killing process with pid 1876508 00:05:14.752 02:53:09 -- common/autotest_common.sh@945 -- # kill 1876508 00:05:14.752 02:53:09 -- common/autotest_common.sh@950 -- # wait 1876508 00:05:14.752 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:14.752 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:14.752 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 
00:05:16.652 02:53:11 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:16.652 02:53:11 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:16.652 02:53:11 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:16.652 02:53:11 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:16.652 02:53:11 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:16.652 02:53:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:16.652 02:53:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.652 02:53:11 -- spdk/autotest.sh@175 -- # run_test env
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:16.652 02:53:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.652 02:53:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.652 02:53:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.652 ************************************ 00:05:16.652 START TEST env 00:05:16.652 ************************************ 00:05:16.652 02:53:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:16.652 * Looking for test storage... 00:05:16.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:16.652 02:53:11 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:16.652 02:53:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.652 02:53:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.652 02:53:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.652 ************************************ 00:05:16.653 START TEST env_memory 00:05:16.653 ************************************ 00:05:16.653 02:53:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:16.653 00:05:16.653 00:05:16.653 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.653 http://cunit.sourceforge.net/ 00:05:16.653 00:05:16.653 00:05:16.653 Suite: memory 00:05:16.653 Test: alloc and free memory map ...[2024-07-14 02:53:11.649605] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:16.653 passed 00:05:16.653 Test: mem map translation ...[2024-07-14 02:53:11.671588] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 
00:05:16.653 [2024-07-14 02:53:11.671611] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:16.653 [2024-07-14 02:53:11.671656] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:16.653 [2024-07-14 02:53:11.671668] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:16.653 passed 00:05:16.653 Test: mem map registration ...[2024-07-14 02:53:11.715481] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:16.653 [2024-07-14 02:53:11.715501] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:16.653 passed 00:05:16.653 Test: mem map adjacent registrations ...passed 00:05:16.653 00:05:16.653 Run Summary: Type Total Ran Passed Failed Inactive 00:05:16.653 suites 1 1 n/a 0 0 00:05:16.653 tests 4 4 4 0 0 00:05:16.653 asserts 152 152 152 0 n/a 00:05:16.653 00:05:16.653 Elapsed time = 0.147 seconds 00:05:16.653 00:05:16.653 real 0m0.156s 00:05:16.653 user 0m0.145s 00:05:16.653 sys 0m0.011s 00:05:16.653 02:53:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.653 02:53:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.653 ************************************ 00:05:16.653 END TEST env_memory 00:05:16.653 ************************************ 00:05:16.653 02:53:11 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:16.653 02:53:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:05:16.653 02:53:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.653 02:53:11 -- common/autotest_common.sh@10 -- # set +x 00:05:16.653 ************************************ 00:05:16.653 START TEST env_vtophys 00:05:16.653 ************************************ 00:05:16.653 02:53:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:16.653 EAL: lib.eal log level changed from notice to debug 00:05:16.653 EAL: Detected lcore 0 as core 0 on socket 0 00:05:16.653 EAL: Detected lcore 1 as core 1 on socket 0 00:05:16.653 EAL: Detected lcore 2 as core 2 on socket 0 00:05:16.653 EAL: Detected lcore 3 as core 3 on socket 0 00:05:16.653 EAL: Detected lcore 4 as core 4 on socket 0 00:05:16.653 EAL: Detected lcore 5 as core 5 on socket 0 00:05:16.653 EAL: Detected lcore 6 as core 8 on socket 0 00:05:16.653 EAL: Detected lcore 7 as core 9 on socket 0 00:05:16.653 EAL: Detected lcore 8 as core 10 on socket 0 00:05:16.653 EAL: Detected lcore 9 as core 11 on socket 0 00:05:16.653 EAL: Detected lcore 10 as core 12 on socket 0 00:05:16.653 EAL: Detected lcore 11 as core 13 on socket 0 00:05:16.653 EAL: Detected lcore 12 as core 0 on socket 1 00:05:16.653 EAL: Detected lcore 13 as core 1 on socket 1 00:05:16.653 EAL: Detected lcore 14 as core 2 on socket 1 00:05:16.653 EAL: Detected lcore 15 as core 3 on socket 1 00:05:16.653 EAL: Detected lcore 16 as core 4 on socket 1 00:05:16.653 EAL: Detected lcore 17 as core 5 on socket 1 00:05:16.653 EAL: Detected lcore 18 as core 8 on socket 1 00:05:16.653 EAL: Detected lcore 19 as core 9 on socket 1 00:05:16.653 EAL: Detected lcore 20 as core 10 on socket 1 00:05:16.653 EAL: Detected lcore 21 as core 11 on socket 1 00:05:16.653 EAL: Detected lcore 22 as core 12 on socket 1 00:05:16.653 EAL: Detected lcore 23 as core 13 on socket 1 00:05:16.653 EAL: Detected lcore 24 as core 0 on socket 0 00:05:16.653 EAL: Detected lcore 25 as core 1 on socket 0 00:05:16.653 
EAL: Detected lcore 26 as core 2 on socket 0 00:05:16.653 EAL: Detected lcore 27 as core 3 on socket 0 00:05:16.653 EAL: Detected lcore 28 as core 4 on socket 0 00:05:16.653 EAL: Detected lcore 29 as core 5 on socket 0 00:05:16.653 EAL: Detected lcore 30 as core 8 on socket 0 00:05:16.653 EAL: Detected lcore 31 as core 9 on socket 0 00:05:16.653 EAL: Detected lcore 32 as core 10 on socket 0 00:05:16.653 EAL: Detected lcore 33 as core 11 on socket 0 00:05:16.653 EAL: Detected lcore 34 as core 12 on socket 0 00:05:16.653 EAL: Detected lcore 35 as core 13 on socket 0 00:05:16.653 EAL: Detected lcore 36 as core 0 on socket 1 00:05:16.653 EAL: Detected lcore 37 as core 1 on socket 1 00:05:16.653 EAL: Detected lcore 38 as core 2 on socket 1 00:05:16.653 EAL: Detected lcore 39 as core 3 on socket 1 00:05:16.653 EAL: Detected lcore 40 as core 4 on socket 1 00:05:16.653 EAL: Detected lcore 41 as core 5 on socket 1 00:05:16.653 EAL: Detected lcore 42 as core 8 on socket 1 00:05:16.653 EAL: Detected lcore 43 as core 9 on socket 1 00:05:16.653 EAL: Detected lcore 44 as core 10 on socket 1 00:05:16.653 EAL: Detected lcore 45 as core 11 on socket 1 00:05:16.653 EAL: Detected lcore 46 as core 12 on socket 1 00:05:16.653 EAL: Detected lcore 47 as core 13 on socket 1 00:05:16.653 EAL: Maximum logical cores by configuration: 128 00:05:16.653 EAL: Detected CPU lcores: 48 00:05:16.653 EAL: Detected NUMA nodes: 2 00:05:16.653 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:16.653 EAL: Detected shared linkage of DPDK 00:05:16.653 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:16.653 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:16.653 EAL: Registered [vdev] bus. 
00:05:16.653 EAL: bus.vdev log level changed from disabled to notice 00:05:16.653 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:16.653 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:16.653 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:16.653 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:16.653 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:16.653 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:16.653 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:16.653 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:16.653 EAL: No shared files mode enabled, IPC will be disabled 00:05:16.653 EAL: No shared files mode enabled, IPC is disabled 00:05:16.653 EAL: Bus pci wants IOVA as 'DC' 00:05:16.653 EAL: Bus vdev wants IOVA as 'DC' 00:05:16.653 EAL: Buses did not request a specific IOVA mode. 00:05:16.653 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:16.653 EAL: Selected IOVA mode 'VA' 00:05:16.653 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.653 EAL: Probing VFIO support... 00:05:16.653 EAL: IOMMU type 1 (Type 1) is supported 00:05:16.653 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:16.653 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:16.653 EAL: VFIO support initialized 00:05:16.653 EAL: Ask a virtual area of 0x2e000 bytes 00:05:16.653 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:16.653 EAL: Setting up physically contiguous memory... 
00:05:16.653 EAL: Setting maximum number of open files to 524288 00:05:16.653 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:16.653 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:16.653 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:16.653 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.653 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:16.653 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.653 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.653 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:16.653 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:16.653 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.653 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:16.653 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.653 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.653 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:16.653 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:16.653 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.653 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:16.653 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.653 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.653 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:16.653 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:16.653 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.653 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:16.653 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:16.653 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.653 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:16.653 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:16.653 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:05:16.653 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.653 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:16.653 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:16.653 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.653 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:16.653 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:16.653 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.653 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:16.653 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:16.653 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.653 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:16.653 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:16.653 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.653 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:16.653 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:16.653 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.653 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:16.653 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:16.653 EAL: Ask a virtual area of 0x61000 bytes 00:05:16.654 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:16.654 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:16.654 EAL: Ask a virtual area of 0x400000000 bytes 00:05:16.654 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:16.654 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:16.654 EAL: Hugepages will be freed exactly as allocated. 
00:05:16.654 EAL: No shared files mode enabled, IPC is disabled 00:05:16.654 EAL: No shared files mode enabled, IPC is disabled 00:05:16.654 EAL: TSC frequency is ~2700000 KHz 00:05:16.654 EAL: Main lcore 0 is ready (tid=7fb391197a00;cpuset=[0]) 00:05:16.654 EAL: Trying to obtain current memory policy. 00:05:16.654 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.654 EAL: Restoring previous memory policy: 0 00:05:16.654 EAL: request: mp_malloc_sync 00:05:16.654 EAL: No shared files mode enabled, IPC is disabled 00:05:16.654 EAL: Heap on socket 0 was expanded by 2MB 00:05:16.654 EAL: No shared files mode enabled, IPC is disabled 00:05:16.654 EAL: No shared files mode enabled, IPC is disabled 00:05:16.654 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:16.654 EAL: Mem event callback 'spdk:(nil)' registered 00:05:16.654 00:05:16.654 00:05:16.654 CUnit - A unit testing framework for C - Version 2.1-3 00:05:16.654 http://cunit.sourceforge.net/ 00:05:16.654 00:05:16.654 00:05:16.654 Suite: components_suite 00:05:16.654 Test: vtophys_malloc_test ...passed 00:05:16.654 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:16.654 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.654 EAL: Restoring previous memory policy: 4 00:05:16.654 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.654 EAL: request: mp_malloc_sync 00:05:16.654 EAL: No shared files mode enabled, IPC is disabled 00:05:16.654 EAL: Heap on socket 0 was expanded by 4MB 00:05:16.654 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.654 EAL: request: mp_malloc_sync 00:05:16.654 EAL: No shared files mode enabled, IPC is disabled 00:05:16.654 EAL: Heap on socket 0 was shrunk by 4MB 00:05:16.654 EAL: Trying to obtain current memory policy. 
00:05:16.654 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.654 EAL: Restoring previous memory policy: 4 00:05:16.654 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.654 EAL: request: mp_malloc_sync 00:05:16.654 EAL: No shared files mode enabled, IPC is disabled 00:05:16.654 EAL: Heap on socket 0 was expanded by 6MB 00:05:16.654 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.654 EAL: request: mp_malloc_sync 00:05:16.654 EAL: No shared files mode enabled, IPC is disabled 00:05:16.654 EAL: Heap on socket 0 was shrunk by 6MB 00:05:16.654 EAL: Trying to obtain current memory policy. 00:05:16.654 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.654 EAL: Restoring previous memory policy: 4 00:05:16.654 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.654 EAL: request: mp_malloc_sync 00:05:16.654 EAL: No shared files mode enabled, IPC is disabled 00:05:16.654 EAL: Heap on socket 0 was expanded by 10MB 00:05:16.654 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.654 EAL: request: mp_malloc_sync 00:05:16.654 EAL: No shared files mode enabled, IPC is disabled 00:05:16.654 EAL: Heap on socket 0 was shrunk by 10MB 00:05:16.654 EAL: Trying to obtain current memory policy. 00:05:16.654 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.654 EAL: Restoring previous memory policy: 4 00:05:16.654 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.654 EAL: request: mp_malloc_sync 00:05:16.654 EAL: No shared files mode enabled, IPC is disabled 00:05:16.654 EAL: Heap on socket 0 was expanded by 18MB 00:05:16.654 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.654 EAL: request: mp_malloc_sync 00:05:16.654 EAL: No shared files mode enabled, IPC is disabled 00:05:16.654 EAL: Heap on socket 0 was shrunk by 18MB 00:05:16.654 EAL: Trying to obtain current memory policy. 
00:05:16.654 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.654 EAL: Restoring previous memory policy: 4 00:05:16.654 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.654 EAL: request: mp_malloc_sync 00:05:16.654 EAL: No shared files mode enabled, IPC is disabled 00:05:16.654 EAL: Heap on socket 0 was expanded by 34MB 00:05:16.654 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.654 EAL: request: mp_malloc_sync 00:05:16.654 EAL: No shared files mode enabled, IPC is disabled 00:05:16.654 EAL: Heap on socket 0 was shrunk by 34MB 00:05:16.654 EAL: Trying to obtain current memory policy. 00:05:16.654 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.912 EAL: Restoring previous memory policy: 4 00:05:16.912 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.912 EAL: request: mp_malloc_sync 00:05:16.912 EAL: No shared files mode enabled, IPC is disabled 00:05:16.912 EAL: Heap on socket 0 was expanded by 66MB 00:05:16.912 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.912 EAL: request: mp_malloc_sync 00:05:16.912 EAL: No shared files mode enabled, IPC is disabled 00:05:16.912 EAL: Heap on socket 0 was shrunk by 66MB 00:05:16.912 EAL: Trying to obtain current memory policy. 00:05:16.912 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.912 EAL: Restoring previous memory policy: 4 00:05:16.912 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.912 EAL: request: mp_malloc_sync 00:05:16.912 EAL: No shared files mode enabled, IPC is disabled 00:05:16.912 EAL: Heap on socket 0 was expanded by 130MB 00:05:16.912 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.912 EAL: request: mp_malloc_sync 00:05:16.912 EAL: No shared files mode enabled, IPC is disabled 00:05:16.912 EAL: Heap on socket 0 was shrunk by 130MB 00:05:16.912 EAL: Trying to obtain current memory policy. 
00:05:16.912 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:16.912 EAL: Restoring previous memory policy: 4 00:05:16.912 EAL: Calling mem event callback 'spdk:(nil)' 00:05:16.912 EAL: request: mp_malloc_sync 00:05:16.912 EAL: No shared files mode enabled, IPC is disabled 00:05:16.912 EAL: Heap on socket 0 was expanded by 258MB 00:05:16.912 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.170 EAL: request: mp_malloc_sync 00:05:17.170 EAL: No shared files mode enabled, IPC is disabled 00:05:17.170 EAL: Heap on socket 0 was shrunk by 258MB 00:05:17.170 EAL: Trying to obtain current memory policy. 00:05:17.170 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.170 EAL: Restoring previous memory policy: 4 00:05:17.170 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.170 EAL: request: mp_malloc_sync 00:05:17.170 EAL: No shared files mode enabled, IPC is disabled 00:05:17.170 EAL: Heap on socket 0 was expanded by 514MB 00:05:17.428 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.428 EAL: request: mp_malloc_sync 00:05:17.428 EAL: No shared files mode enabled, IPC is disabled 00:05:17.428 EAL: Heap on socket 0 was shrunk by 514MB 00:05:17.428 EAL: Trying to obtain current memory policy. 
00:05:17.428 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:17.686 EAL: Restoring previous memory policy: 4 00:05:17.686 EAL: Calling mem event callback 'spdk:(nil)' 00:05:17.686 EAL: request: mp_malloc_sync 00:05:17.686 EAL: No shared files mode enabled, IPC is disabled 00:05:17.686 EAL: Heap on socket 0 was expanded by 1026MB 00:05:17.944 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.200 EAL: request: mp_malloc_sync 00:05:18.200 EAL: No shared files mode enabled, IPC is disabled 00:05:18.201 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:18.201 passed 00:05:18.201 00:05:18.201 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.201 suites 1 1 n/a 0 0 00:05:18.201 tests 2 2 2 0 0 00:05:18.201 asserts 497 497 497 0 n/a 00:05:18.201 00:05:18.201 Elapsed time = 1.379 seconds 00:05:18.201 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.201 EAL: request: mp_malloc_sync 00:05:18.201 EAL: No shared files mode enabled, IPC is disabled 00:05:18.201 EAL: Heap on socket 0 was shrunk by 2MB 00:05:18.201 EAL: No shared files mode enabled, IPC is disabled 00:05:18.201 EAL: No shared files mode enabled, IPC is disabled 00:05:18.201 EAL: No shared files mode enabled, IPC is disabled 00:05:18.201 00:05:18.201 real 0m1.492s 00:05:18.201 user 0m0.865s 00:05:18.201 sys 0m0.598s 00:05:18.201 02:53:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.201 02:53:13 -- common/autotest_common.sh@10 -- # set +x 00:05:18.201 ************************************ 00:05:18.201 END TEST env_vtophys 00:05:18.201 ************************************ 00:05:18.201 02:53:13 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:18.201 02:53:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.201 02:53:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.201 02:53:13 -- common/autotest_common.sh@10 -- # set +x 00:05:18.201 ************************************ 00:05:18.201 
START TEST env_pci 00:05:18.201 ************************************ 00:05:18.201 02:53:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:18.201 00:05:18.201 00:05:18.201 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.201 http://cunit.sourceforge.net/ 00:05:18.201 00:05:18.201 00:05:18.201 Suite: pci 00:05:18.201 Test: pci_hook ...[2024-07-14 02:53:13.320388] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1877517 has claimed it 00:05:18.201 EAL: Cannot find device (10000:00:01.0) 00:05:18.201 EAL: Failed to attach device on primary process 00:05:18.201 passed 00:05:18.201 00:05:18.201 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.201 suites 1 1 n/a 0 0 00:05:18.201 tests 1 1 1 0 0 00:05:18.201 asserts 25 25 25 0 n/a 00:05:18.201 00:05:18.201 Elapsed time = 0.022 seconds 00:05:18.201 00:05:18.201 real 0m0.033s 00:05:18.201 user 0m0.008s 00:05:18.201 sys 0m0.025s 00:05:18.201 02:53:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.201 02:53:13 -- common/autotest_common.sh@10 -- # set +x 00:05:18.201 ************************************ 00:05:18.201 END TEST env_pci 00:05:18.201 ************************************ 00:05:18.201 02:53:13 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:18.201 02:53:13 -- env/env.sh@15 -- # uname 00:05:18.201 02:53:13 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:18.201 02:53:13 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:18.201 02:53:13 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:18.201 02:53:13 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:18.201 02:53:13 -- common/autotest_common.sh@1083 -- # 
xtrace_disable 00:05:18.201 02:53:13 -- common/autotest_common.sh@10 -- # set +x 00:05:18.201 ************************************ 00:05:18.201 START TEST env_dpdk_post_init 00:05:18.201 ************************************ 00:05:18.201 02:53:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:18.201 EAL: Detected CPU lcores: 48 00:05:18.201 EAL: Detected NUMA nodes: 2 00:05:18.201 EAL: Detected shared linkage of DPDK 00:05:18.201 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:18.201 EAL: Selected IOVA mode 'VA' 00:05:18.201 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.201 EAL: VFIO support initialized 00:05:18.201 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:18.457 EAL: Using IOMMU type 1 (Type 1) 00:05:18.457 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:18.457 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:18.457 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:18.457 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:18.457 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:18.457 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:18.457 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:18.457 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:18.457 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:18.457 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:18.457 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:18.457 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:18.457 EAL: Probe PCI 
driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:18.457 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:18.457 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:18.457 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:19.388 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:22.666 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:22.666 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:22.666 Starting DPDK initialization... 00:05:22.666 Starting SPDK post initialization... 00:05:22.666 SPDK NVMe probe 00:05:22.666 Attaching to 0000:88:00.0 00:05:22.666 Attached to 0000:88:00.0 00:05:22.666 Cleaning up... 00:05:22.666 00:05:22.666 real 0m4.387s 00:05:22.666 user 0m3.271s 00:05:22.666 sys 0m0.173s 00:05:22.666 02:53:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.666 02:53:17 -- common/autotest_common.sh@10 -- # set +x 00:05:22.666 ************************************ 00:05:22.666 END TEST env_dpdk_post_init 00:05:22.666 ************************************ 00:05:22.666 02:53:17 -- env/env.sh@26 -- # uname 00:05:22.666 02:53:17 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:22.666 02:53:17 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:22.666 02:53:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.666 02:53:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.666 02:53:17 -- common/autotest_common.sh@10 -- # set +x 00:05:22.666 ************************************ 00:05:22.666 START TEST env_mem_callbacks 00:05:22.666 ************************************ 00:05:22.667 02:53:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:22.667 EAL: Detected CPU lcores: 48 
00:05:22.667 EAL: Detected NUMA nodes: 2 00:05:22.667 EAL: Detected shared linkage of DPDK 00:05:22.667 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:22.667 EAL: Selected IOVA mode 'VA' 00:05:22.667 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.667 EAL: VFIO support initialized 00:05:22.667 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:22.667 00:05:22.667 00:05:22.667 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.667 http://cunit.sourceforge.net/ 00:05:22.667 00:05:22.667 00:05:22.667 Suite: memory 00:05:22.667 Test: test ... 00:05:22.667 register 0x200000200000 2097152 00:05:22.667 malloc 3145728 00:05:22.667 register 0x200000400000 4194304 00:05:22.667 buf 0x200000500000 len 3145728 PASSED 00:05:22.667 malloc 64 00:05:22.667 buf 0x2000004fff40 len 64 PASSED 00:05:22.667 malloc 4194304 00:05:22.667 register 0x200000800000 6291456 00:05:22.667 buf 0x200000a00000 len 4194304 PASSED 00:05:22.667 free 0x200000500000 3145728 00:05:22.667 free 0x2000004fff40 64 00:05:22.667 unregister 0x200000400000 4194304 PASSED 00:05:22.667 free 0x200000a00000 4194304 00:05:22.667 unregister 0x200000800000 6291456 PASSED 00:05:22.667 malloc 8388608 00:05:22.667 register 0x200000400000 10485760 00:05:22.667 buf 0x200000600000 len 8388608 PASSED 00:05:22.667 free 0x200000600000 8388608 00:05:22.667 unregister 0x200000400000 10485760 PASSED 00:05:22.667 passed 00:05:22.667 00:05:22.667 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.667 suites 1 1 n/a 0 0 00:05:22.667 tests 1 1 1 0 0 00:05:22.667 asserts 15 15 15 0 n/a 00:05:22.667 00:05:22.667 Elapsed time = 0.005 seconds 00:05:22.667 00:05:22.667 real 0m0.046s 00:05:22.667 user 0m0.012s 00:05:22.667 sys 0m0.034s 00:05:22.667 02:53:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.667 02:53:17 -- common/autotest_common.sh@10 -- # set +x 00:05:22.667 ************************************ 00:05:22.667 END TEST env_mem_callbacks 00:05:22.667 
************************************ 00:05:22.667 00:05:22.667 real 0m6.286s 00:05:22.667 user 0m4.358s 00:05:22.667 sys 0m0.981s 00:05:22.667 02:53:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.667 02:53:17 -- common/autotest_common.sh@10 -- # set +x 00:05:22.667 ************************************ 00:05:22.667 END TEST env 00:05:22.667 ************************************ 00:05:22.667 02:53:17 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:22.667 02:53:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.667 02:53:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.667 02:53:17 -- common/autotest_common.sh@10 -- # set +x 00:05:22.667 ************************************ 00:05:22.667 START TEST rpc 00:05:22.667 ************************************ 00:05:22.667 02:53:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:22.667 * Looking for test storage... 00:05:22.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:22.667 02:53:17 -- rpc/rpc.sh@65 -- # spdk_pid=1878201 00:05:22.667 02:53:17 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:22.667 02:53:17 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.667 02:53:17 -- rpc/rpc.sh@67 -- # waitforlisten 1878201 00:05:22.925 02:53:17 -- common/autotest_common.sh@819 -- # '[' -z 1878201 ']' 00:05:22.926 02:53:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.926 02:53:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:22.926 02:53:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:22.926 02:53:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:22.926 02:53:17 -- common/autotest_common.sh@10 -- # set +x 00:05:22.926 [2024-07-14 02:53:17.967633] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:05:22.926 [2024-07-14 02:53:17.967709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1878201 ] 00:05:22.926 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.926 [2024-07-14 02:53:18.024615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.926 [2024-07-14 02:53:18.106757] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.926 [2024-07-14 02:53:18.106940] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:22.926 [2024-07-14 02:53:18.106959] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1878201' to capture a snapshot of events at runtime. 00:05:22.926 [2024-07-14 02:53:18.106972] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1878201 for offline analysis/debug. 
00:05:22.926 [2024-07-14 02:53:18.107012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.860 02:53:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:23.860 02:53:18 -- common/autotest_common.sh@852 -- # return 0 00:05:23.860 02:53:18 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:23.860 02:53:18 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:23.860 02:53:18 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:23.860 02:53:18 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:23.860 02:53:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.860 02:53:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.860 02:53:18 -- common/autotest_common.sh@10 -- # set +x 00:05:23.860 ************************************ 00:05:23.860 START TEST rpc_integrity 00:05:23.860 ************************************ 00:05:23.860 02:53:18 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:23.860 02:53:18 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:23.860 02:53:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.860 02:53:18 -- common/autotest_common.sh@10 -- # set +x 00:05:23.860 02:53:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.860 02:53:18 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:23.860 02:53:18 -- rpc/rpc.sh@13 -- # jq length 00:05:23.860 02:53:18 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 
00:05:23.860 02:53:18 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:23.860 02:53:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.860 02:53:18 -- common/autotest_common.sh@10 -- # set +x 00:05:23.860 02:53:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.860 02:53:18 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:23.860 02:53:18 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:23.860 02:53:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.860 02:53:18 -- common/autotest_common.sh@10 -- # set +x 00:05:23.860 02:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.860 02:53:19 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:23.860 { 00:05:23.860 "name": "Malloc0", 00:05:23.860 "aliases": [ 00:05:23.860 "4ae7d1b8-10f7-4aa4-907d-f8b346ffb9e6" 00:05:23.860 ], 00:05:23.860 "product_name": "Malloc disk", 00:05:23.860 "block_size": 512, 00:05:23.860 "num_blocks": 16384, 00:05:23.860 "uuid": "4ae7d1b8-10f7-4aa4-907d-f8b346ffb9e6", 00:05:23.860 "assigned_rate_limits": { 00:05:23.860 "rw_ios_per_sec": 0, 00:05:23.860 "rw_mbytes_per_sec": 0, 00:05:23.860 "r_mbytes_per_sec": 0, 00:05:23.860 "w_mbytes_per_sec": 0 00:05:23.860 }, 00:05:23.861 "claimed": false, 00:05:23.861 "zoned": false, 00:05:23.861 "supported_io_types": { 00:05:23.861 "read": true, 00:05:23.861 "write": true, 00:05:23.861 "unmap": true, 00:05:23.861 "write_zeroes": true, 00:05:23.861 "flush": true, 00:05:23.861 "reset": true, 00:05:23.861 "compare": false, 00:05:23.861 "compare_and_write": false, 00:05:23.861 "abort": true, 00:05:23.861 "nvme_admin": false, 00:05:23.861 "nvme_io": false 00:05:23.861 }, 00:05:23.861 "memory_domains": [ 00:05:23.861 { 00:05:23.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.861 "dma_device_type": 2 00:05:23.861 } 00:05:23.861 ], 00:05:23.861 "driver_specific": {} 00:05:23.861 } 00:05:23.861 ]' 00:05:23.861 02:53:19 -- rpc/rpc.sh@17 -- # jq length 00:05:23.861 02:53:19 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 
00:05:23.861 02:53:19 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:23.861 02:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.861 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:23.861 [2024-07-14 02:53:19.050744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:23.861 [2024-07-14 02:53:19.050793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:23.861 [2024-07-14 02:53:19.050817] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb255b0 00:05:23.861 [2024-07-14 02:53:19.050832] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:23.861 [2024-07-14 02:53:19.052302] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:23.861 [2024-07-14 02:53:19.052332] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:23.861 Passthru0 00:05:23.861 02:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.861 02:53:19 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:23.861 02:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.861 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:23.861 02:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:23.861 02:53:19 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:23.861 { 00:05:23.861 "name": "Malloc0", 00:05:23.861 "aliases": [ 00:05:23.861 "4ae7d1b8-10f7-4aa4-907d-f8b346ffb9e6" 00:05:23.861 ], 00:05:23.861 "product_name": "Malloc disk", 00:05:23.861 "block_size": 512, 00:05:23.861 "num_blocks": 16384, 00:05:23.861 "uuid": "4ae7d1b8-10f7-4aa4-907d-f8b346ffb9e6", 00:05:23.861 "assigned_rate_limits": { 00:05:23.861 "rw_ios_per_sec": 0, 00:05:23.861 "rw_mbytes_per_sec": 0, 00:05:23.861 "r_mbytes_per_sec": 0, 00:05:23.861 "w_mbytes_per_sec": 0 00:05:23.861 }, 00:05:23.861 "claimed": true, 00:05:23.861 "claim_type": "exclusive_write", 00:05:23.861 "zoned": 
false, 00:05:23.861 "supported_io_types": { 00:05:23.861 "read": true, 00:05:23.861 "write": true, 00:05:23.861 "unmap": true, 00:05:23.861 "write_zeroes": true, 00:05:23.861 "flush": true, 00:05:23.861 "reset": true, 00:05:23.861 "compare": false, 00:05:23.861 "compare_and_write": false, 00:05:23.861 "abort": true, 00:05:23.861 "nvme_admin": false, 00:05:23.861 "nvme_io": false 00:05:23.861 }, 00:05:23.861 "memory_domains": [ 00:05:23.861 { 00:05:23.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.861 "dma_device_type": 2 00:05:23.861 } 00:05:23.861 ], 00:05:23.861 "driver_specific": {} 00:05:23.861 }, 00:05:23.861 { 00:05:23.861 "name": "Passthru0", 00:05:23.861 "aliases": [ 00:05:23.861 "6dd69ef6-cc34-5a11-9bf1-9b2bdef141be" 00:05:23.861 ], 00:05:23.861 "product_name": "passthru", 00:05:23.861 "block_size": 512, 00:05:23.861 "num_blocks": 16384, 00:05:23.861 "uuid": "6dd69ef6-cc34-5a11-9bf1-9b2bdef141be", 00:05:23.861 "assigned_rate_limits": { 00:05:23.861 "rw_ios_per_sec": 0, 00:05:23.861 "rw_mbytes_per_sec": 0, 00:05:23.861 "r_mbytes_per_sec": 0, 00:05:23.861 "w_mbytes_per_sec": 0 00:05:23.861 }, 00:05:23.861 "claimed": false, 00:05:23.861 "zoned": false, 00:05:23.861 "supported_io_types": { 00:05:23.861 "read": true, 00:05:23.861 "write": true, 00:05:23.861 "unmap": true, 00:05:23.861 "write_zeroes": true, 00:05:23.861 "flush": true, 00:05:23.861 "reset": true, 00:05:23.861 "compare": false, 00:05:23.861 "compare_and_write": false, 00:05:23.861 "abort": true, 00:05:23.861 "nvme_admin": false, 00:05:23.861 "nvme_io": false 00:05:23.861 }, 00:05:23.861 "memory_domains": [ 00:05:23.861 { 00:05:23.861 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:23.861 "dma_device_type": 2 00:05:23.861 } 00:05:23.861 ], 00:05:23.861 "driver_specific": { 00:05:23.861 "passthru": { 00:05:23.861 "name": "Passthru0", 00:05:23.861 "base_bdev_name": "Malloc0" 00:05:23.861 } 00:05:23.861 } 00:05:23.861 } 00:05:23.861 ]' 00:05:23.861 02:53:19 -- rpc/rpc.sh@21 -- # jq length 
00:05:23.861 02:53:19 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:23.861 02:53:19 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:23.861 02:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:23.861 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.119 02:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.119 02:53:19 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:24.119 02:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.119 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.119 02:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.119 02:53:19 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:24.119 02:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.119 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.119 02:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.119 02:53:19 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:24.119 02:53:19 -- rpc/rpc.sh@26 -- # jq length 00:05:24.119 02:53:19 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:24.119 00:05:24.119 real 0m0.230s 00:05:24.119 user 0m0.144s 00:05:24.119 sys 0m0.028s 00:05:24.119 02:53:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.119 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.119 ************************************ 00:05:24.119 END TEST rpc_integrity 00:05:24.119 ************************************ 00:05:24.119 02:53:19 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:24.119 02:53:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.119 02:53:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.119 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.119 ************************************ 00:05:24.119 START TEST rpc_plugins 00:05:24.119 ************************************ 00:05:24.119 02:53:19 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:24.119 02:53:19 -- 
rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:24.119 02:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.119 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.119 02:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.119 02:53:19 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:24.119 02:53:19 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:24.119 02:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.119 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.119 02:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.119 02:53:19 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:24.119 { 00:05:24.119 "name": "Malloc1", 00:05:24.119 "aliases": [ 00:05:24.119 "9b1c50de-83d2-44bd-8c52-dced634aca75" 00:05:24.119 ], 00:05:24.119 "product_name": "Malloc disk", 00:05:24.119 "block_size": 4096, 00:05:24.119 "num_blocks": 256, 00:05:24.119 "uuid": "9b1c50de-83d2-44bd-8c52-dced634aca75", 00:05:24.119 "assigned_rate_limits": { 00:05:24.119 "rw_ios_per_sec": 0, 00:05:24.119 "rw_mbytes_per_sec": 0, 00:05:24.119 "r_mbytes_per_sec": 0, 00:05:24.119 "w_mbytes_per_sec": 0 00:05:24.119 }, 00:05:24.119 "claimed": false, 00:05:24.119 "zoned": false, 00:05:24.119 "supported_io_types": { 00:05:24.119 "read": true, 00:05:24.119 "write": true, 00:05:24.119 "unmap": true, 00:05:24.119 "write_zeroes": true, 00:05:24.119 "flush": true, 00:05:24.119 "reset": true, 00:05:24.119 "compare": false, 00:05:24.119 "compare_and_write": false, 00:05:24.119 "abort": true, 00:05:24.119 "nvme_admin": false, 00:05:24.119 "nvme_io": false 00:05:24.119 }, 00:05:24.119 "memory_domains": [ 00:05:24.119 { 00:05:24.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.119 "dma_device_type": 2 00:05:24.119 } 00:05:24.119 ], 00:05:24.119 "driver_specific": {} 00:05:24.119 } 00:05:24.119 ]' 00:05:24.119 02:53:19 -- rpc/rpc.sh@32 -- # jq length 00:05:24.119 02:53:19 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:24.119 02:53:19 -- 
rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:24.119 02:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.119 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.119 02:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.119 02:53:19 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:24.119 02:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.119 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.119 02:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.119 02:53:19 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:24.119 02:53:19 -- rpc/rpc.sh@36 -- # jq length 00:05:24.119 02:53:19 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:24.119 00:05:24.119 real 0m0.118s 00:05:24.119 user 0m0.080s 00:05:24.119 sys 0m0.007s 00:05:24.119 02:53:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.119 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.119 ************************************ 00:05:24.119 END TEST rpc_plugins 00:05:24.119 ************************************ 00:05:24.119 02:53:19 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:24.119 02:53:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.119 02:53:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.119 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.119 ************************************ 00:05:24.119 START TEST rpc_trace_cmd_test 00:05:24.119 ************************************ 00:05:24.119 02:53:19 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:24.119 02:53:19 -- rpc/rpc.sh@40 -- # local info 00:05:24.119 02:53:19 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:24.119 02:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.119 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.119 02:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.119 02:53:19 -- 
rpc/rpc.sh@42 -- # info='{ 00:05:24.120 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1878201", 00:05:24.120 "tpoint_group_mask": "0x8", 00:05:24.120 "iscsi_conn": { 00:05:24.120 "mask": "0x2", 00:05:24.120 "tpoint_mask": "0x0" 00:05:24.120 }, 00:05:24.120 "scsi": { 00:05:24.120 "mask": "0x4", 00:05:24.120 "tpoint_mask": "0x0" 00:05:24.120 }, 00:05:24.120 "bdev": { 00:05:24.120 "mask": "0x8", 00:05:24.120 "tpoint_mask": "0xffffffffffffffff" 00:05:24.120 }, 00:05:24.120 "nvmf_rdma": { 00:05:24.120 "mask": "0x10", 00:05:24.120 "tpoint_mask": "0x0" 00:05:24.120 }, 00:05:24.120 "nvmf_tcp": { 00:05:24.120 "mask": "0x20", 00:05:24.120 "tpoint_mask": "0x0" 00:05:24.120 }, 00:05:24.120 "ftl": { 00:05:24.120 "mask": "0x40", 00:05:24.120 "tpoint_mask": "0x0" 00:05:24.120 }, 00:05:24.120 "blobfs": { 00:05:24.120 "mask": "0x80", 00:05:24.120 "tpoint_mask": "0x0" 00:05:24.120 }, 00:05:24.120 "dsa": { 00:05:24.120 "mask": "0x200", 00:05:24.120 "tpoint_mask": "0x0" 00:05:24.120 }, 00:05:24.120 "thread": { 00:05:24.120 "mask": "0x400", 00:05:24.120 "tpoint_mask": "0x0" 00:05:24.120 }, 00:05:24.120 "nvme_pcie": { 00:05:24.120 "mask": "0x800", 00:05:24.120 "tpoint_mask": "0x0" 00:05:24.120 }, 00:05:24.120 "iaa": { 00:05:24.120 "mask": "0x1000", 00:05:24.120 "tpoint_mask": "0x0" 00:05:24.120 }, 00:05:24.120 "nvme_tcp": { 00:05:24.120 "mask": "0x2000", 00:05:24.120 "tpoint_mask": "0x0" 00:05:24.120 }, 00:05:24.120 "bdev_nvme": { 00:05:24.120 "mask": "0x4000", 00:05:24.120 "tpoint_mask": "0x0" 00:05:24.120 } 00:05:24.120 }' 00:05:24.120 02:53:19 -- rpc/rpc.sh@43 -- # jq length 00:05:24.378 02:53:19 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:24.378 02:53:19 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:24.378 02:53:19 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:24.378 02:53:19 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:24.378 02:53:19 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:24.378 02:53:19 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:24.378 
02:53:19 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:24.378 02:53:19 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:24.378 02:53:19 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:24.378 00:05:24.378 real 0m0.194s 00:05:24.378 user 0m0.175s 00:05:24.378 sys 0m0.013s 00:05:24.378 02:53:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.378 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.378 ************************************ 00:05:24.378 END TEST rpc_trace_cmd_test 00:05:24.378 ************************************ 00:05:24.378 02:53:19 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:24.378 02:53:19 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:24.378 02:53:19 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:24.378 02:53:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.378 02:53:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.378 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.378 ************************************ 00:05:24.378 START TEST rpc_daemon_integrity 00:05:24.378 ************************************ 00:05:24.378 02:53:19 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:24.378 02:53:19 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:24.378 02:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.378 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.378 02:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.378 02:53:19 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:24.378 02:53:19 -- rpc/rpc.sh@13 -- # jq length 00:05:24.378 02:53:19 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:24.378 02:53:19 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:24.378 02:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.378 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.378 02:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.378 02:53:19 -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:05:24.378 02:53:19 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:24.378 02:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.378 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.378 02:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.378 02:53:19 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:24.378 { 00:05:24.378 "name": "Malloc2", 00:05:24.378 "aliases": [ 00:05:24.378 "065f019d-d15a-41fa-aaad-e3d20b6e0a04" 00:05:24.378 ], 00:05:24.378 "product_name": "Malloc disk", 00:05:24.378 "block_size": 512, 00:05:24.378 "num_blocks": 16384, 00:05:24.378 "uuid": "065f019d-d15a-41fa-aaad-e3d20b6e0a04", 00:05:24.378 "assigned_rate_limits": { 00:05:24.378 "rw_ios_per_sec": 0, 00:05:24.378 "rw_mbytes_per_sec": 0, 00:05:24.378 "r_mbytes_per_sec": 0, 00:05:24.378 "w_mbytes_per_sec": 0 00:05:24.378 }, 00:05:24.378 "claimed": false, 00:05:24.378 "zoned": false, 00:05:24.378 "supported_io_types": { 00:05:24.378 "read": true, 00:05:24.378 "write": true, 00:05:24.378 "unmap": true, 00:05:24.378 "write_zeroes": true, 00:05:24.378 "flush": true, 00:05:24.378 "reset": true, 00:05:24.378 "compare": false, 00:05:24.378 "compare_and_write": false, 00:05:24.378 "abort": true, 00:05:24.378 "nvme_admin": false, 00:05:24.378 "nvme_io": false 00:05:24.378 }, 00:05:24.378 "memory_domains": [ 00:05:24.378 { 00:05:24.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.378 "dma_device_type": 2 00:05:24.378 } 00:05:24.378 ], 00:05:24.378 "driver_specific": {} 00:05:24.378 } 00:05:24.378 ]' 00:05:24.378 02:53:19 -- rpc/rpc.sh@17 -- # jq length 00:05:24.638 02:53:19 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:24.638 02:53:19 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:24.638 02:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.638 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.638 [2024-07-14 02:53:19.668538] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match 
on Malloc2 00:05:24.638 [2024-07-14 02:53:19.668584] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:24.638 [2024-07-14 02:53:19.668608] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbe66f0 00:05:24.638 [2024-07-14 02:53:19.668624] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:24.638 [2024-07-14 02:53:19.669940] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:24.638 [2024-07-14 02:53:19.669966] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:24.638 Passthru0 00:05:24.638 02:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.638 02:53:19 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:24.638 02:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.638 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.638 02:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.638 02:53:19 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:24.638 { 00:05:24.638 "name": "Malloc2", 00:05:24.638 "aliases": [ 00:05:24.638 "065f019d-d15a-41fa-aaad-e3d20b6e0a04" 00:05:24.638 ], 00:05:24.638 "product_name": "Malloc disk", 00:05:24.638 "block_size": 512, 00:05:24.638 "num_blocks": 16384, 00:05:24.638 "uuid": "065f019d-d15a-41fa-aaad-e3d20b6e0a04", 00:05:24.638 "assigned_rate_limits": { 00:05:24.638 "rw_ios_per_sec": 0, 00:05:24.638 "rw_mbytes_per_sec": 0, 00:05:24.638 "r_mbytes_per_sec": 0, 00:05:24.638 "w_mbytes_per_sec": 0 00:05:24.638 }, 00:05:24.638 "claimed": true, 00:05:24.638 "claim_type": "exclusive_write", 00:05:24.638 "zoned": false, 00:05:24.638 "supported_io_types": { 00:05:24.638 "read": true, 00:05:24.638 "write": true, 00:05:24.638 "unmap": true, 00:05:24.638 "write_zeroes": true, 00:05:24.638 "flush": true, 00:05:24.638 "reset": true, 00:05:24.638 "compare": false, 00:05:24.638 "compare_and_write": false, 00:05:24.638 "abort": true, 00:05:24.638 
"nvme_admin": false, 00:05:24.638 "nvme_io": false 00:05:24.638 }, 00:05:24.638 "memory_domains": [ 00:05:24.638 { 00:05:24.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.638 "dma_device_type": 2 00:05:24.638 } 00:05:24.638 ], 00:05:24.638 "driver_specific": {} 00:05:24.638 }, 00:05:24.638 { 00:05:24.638 "name": "Passthru0", 00:05:24.638 "aliases": [ 00:05:24.638 "776de6b0-d032-5ce5-8cc2-0c0fa602b9ff" 00:05:24.638 ], 00:05:24.638 "product_name": "passthru", 00:05:24.638 "block_size": 512, 00:05:24.638 "num_blocks": 16384, 00:05:24.638 "uuid": "776de6b0-d032-5ce5-8cc2-0c0fa602b9ff", 00:05:24.638 "assigned_rate_limits": { 00:05:24.638 "rw_ios_per_sec": 0, 00:05:24.638 "rw_mbytes_per_sec": 0, 00:05:24.638 "r_mbytes_per_sec": 0, 00:05:24.638 "w_mbytes_per_sec": 0 00:05:24.638 }, 00:05:24.638 "claimed": false, 00:05:24.638 "zoned": false, 00:05:24.638 "supported_io_types": { 00:05:24.638 "read": true, 00:05:24.638 "write": true, 00:05:24.638 "unmap": true, 00:05:24.638 "write_zeroes": true, 00:05:24.638 "flush": true, 00:05:24.638 "reset": true, 00:05:24.638 "compare": false, 00:05:24.638 "compare_and_write": false, 00:05:24.638 "abort": true, 00:05:24.638 "nvme_admin": false, 00:05:24.638 "nvme_io": false 00:05:24.638 }, 00:05:24.638 "memory_domains": [ 00:05:24.638 { 00:05:24.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:24.638 "dma_device_type": 2 00:05:24.638 } 00:05:24.638 ], 00:05:24.638 "driver_specific": { 00:05:24.638 "passthru": { 00:05:24.638 "name": "Passthru0", 00:05:24.638 "base_bdev_name": "Malloc2" 00:05:24.638 } 00:05:24.638 } 00:05:24.638 } 00:05:24.638 ]' 00:05:24.638 02:53:19 -- rpc/rpc.sh@21 -- # jq length 00:05:24.638 02:53:19 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:24.638 02:53:19 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:24.638 02:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.638 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.638 02:53:19 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.638 02:53:19 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:24.638 02:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.638 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.638 02:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.638 02:53:19 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:24.638 02:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:24.638 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.638 02:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:24.638 02:53:19 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:24.638 02:53:19 -- rpc/rpc.sh@26 -- # jq length 00:05:24.638 02:53:19 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:24.638 00:05:24.638 real 0m0.222s 00:05:24.638 user 0m0.149s 00:05:24.638 sys 0m0.018s 00:05:24.638 02:53:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.638 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:05:24.638 ************************************ 00:05:24.638 END TEST rpc_daemon_integrity 00:05:24.638 ************************************ 00:05:24.638 02:53:19 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:24.638 02:53:19 -- rpc/rpc.sh@84 -- # killprocess 1878201 00:05:24.638 02:53:19 -- common/autotest_common.sh@926 -- # '[' -z 1878201 ']' 00:05:24.638 02:53:19 -- common/autotest_common.sh@930 -- # kill -0 1878201 00:05:24.638 02:53:19 -- common/autotest_common.sh@931 -- # uname 00:05:24.638 02:53:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:24.638 02:53:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1878201 00:05:24.638 02:53:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:24.638 02:53:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:24.638 02:53:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1878201' 00:05:24.638 killing process 
with pid 1878201 00:05:24.638 02:53:19 -- common/autotest_common.sh@945 -- # kill 1878201 00:05:24.638 02:53:19 -- common/autotest_common.sh@950 -- # wait 1878201 00:05:25.245 00:05:25.245 real 0m2.376s 00:05:25.245 user 0m3.070s 00:05:25.245 sys 0m0.564s 00:05:25.245 02:53:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.245 02:53:20 -- common/autotest_common.sh@10 -- # set +x 00:05:25.245 ************************************ 00:05:25.245 END TEST rpc 00:05:25.245 ************************************ 00:05:25.245 02:53:20 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:25.245 02:53:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.245 02:53:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.245 02:53:20 -- common/autotest_common.sh@10 -- # set +x 00:05:25.245 ************************************ 00:05:25.245 START TEST rpc_client 00:05:25.245 ************************************ 00:05:25.245 02:53:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:25.245 * Looking for test storage... 
00:05:25.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:25.245 02:53:20 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:25.245 OK 00:05:25.245 02:53:20 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:25.245 00:05:25.245 real 0m0.065s 00:05:25.245 user 0m0.032s 00:05:25.245 sys 0m0.039s 00:05:25.245 02:53:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.245 02:53:20 -- common/autotest_common.sh@10 -- # set +x 00:05:25.245 ************************************ 00:05:25.245 END TEST rpc_client 00:05:25.245 ************************************ 00:05:25.245 02:53:20 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:25.245 02:53:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.245 02:53:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.245 02:53:20 -- common/autotest_common.sh@10 -- # set +x 00:05:25.245 ************************************ 00:05:25.245 START TEST json_config 00:05:25.245 ************************************ 00:05:25.245 02:53:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:25.245 02:53:20 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:25.245 02:53:20 -- nvmf/common.sh@7 -- # uname -s 00:05:25.245 02:53:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:25.245 02:53:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:25.245 02:53:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:25.245 02:53:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:25.245 02:53:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:25.245 02:53:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:25.245 02:53:20 -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:25.245 02:53:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:25.245 02:53:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:25.245 02:53:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:25.245 02:53:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:25.245 02:53:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:25.245 02:53:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:25.245 02:53:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:25.245 02:53:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:25.245 02:53:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:25.245 02:53:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:25.245 02:53:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:25.245 02:53:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:25.245 02:53:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.245 02:53:20 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.245 02:53:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.245 02:53:20 -- paths/export.sh@5 -- # export PATH 00:05:25.245 02:53:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:25.245 02:53:20 -- nvmf/common.sh@46 -- # : 0 00:05:25.245 02:53:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:25.245 02:53:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:25.245 02:53:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:25.245 02:53:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:25.245 02:53:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:25.245 02:53:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:25.245 02:53:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:25.245 02:53:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:25.245 
02:53:20 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:25.245 02:53:20 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:25.245 02:53:20 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:25.245 02:53:20 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:25.245 02:53:20 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:25.245 02:53:20 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:25.245 02:53:20 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:25.245 02:53:20 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:25.245 02:53:20 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:25.245 02:53:20 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:25.246 02:53:20 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:25.246 02:53:20 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:25.246 02:53:20 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:25.246 02:53:20 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:25.246 02:53:20 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:25.246 INFO: JSON configuration test init 00:05:25.246 02:53:20 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:25.246 02:53:20 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:25.246 02:53:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:25.246 02:53:20 -- 
common/autotest_common.sh@10 -- # set +x 00:05:25.246 02:53:20 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:25.246 02:53:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:25.246 02:53:20 -- common/autotest_common.sh@10 -- # set +x 00:05:25.246 02:53:20 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:25.246 02:53:20 -- json_config/json_config.sh@98 -- # local app=target 00:05:25.246 02:53:20 -- json_config/json_config.sh@99 -- # shift 00:05:25.246 02:53:20 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:25.246 02:53:20 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:25.246 02:53:20 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:25.246 02:53:20 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:25.246 02:53:20 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:25.246 02:53:20 -- json_config/json_config.sh@111 -- # app_pid[$app]=1878683 00:05:25.246 02:53:20 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:25.246 02:53:20 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:25.246 Waiting for target to run... 00:05:25.246 02:53:20 -- json_config/json_config.sh@114 -- # waitforlisten 1878683 /var/tmp/spdk_tgt.sock 00:05:25.246 02:53:20 -- common/autotest_common.sh@819 -- # '[' -z 1878683 ']' 00:05:25.246 02:53:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:25.246 02:53:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:25.246 02:53:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:25.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:25.246 02:53:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:25.246 02:53:20 -- common/autotest_common.sh@10 -- # set +x 00:05:25.246 [2024-07-14 02:53:20.468061] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:05:25.246 [2024-07-14 02:53:20.468148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1878683 ] 00:05:25.246 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.813 [2024-07-14 02:53:20.809645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.813 [2024-07-14 02:53:20.871080] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:25.813 [2024-07-14 02:53:20.871274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.379 02:53:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:26.379 02:53:21 -- common/autotest_common.sh@852 -- # return 0 00:05:26.379 02:53:21 -- json_config/json_config.sh@115 -- # echo '' 00:05:26.379 00:05:26.379 02:53:21 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:26.379 02:53:21 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:26.379 02:53:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:26.379 02:53:21 -- common/autotest_common.sh@10 -- # set +x 00:05:26.379 02:53:21 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:26.379 02:53:21 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:26.379 02:53:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:26.379 02:53:21 -- common/autotest_common.sh@10 -- # set +x 00:05:26.379 02:53:21 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:26.379 02:53:21 -- 
json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:26.379 02:53:21 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:29.660 02:53:24 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:29.660 02:53:24 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:29.660 02:53:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:29.661 02:53:24 -- common/autotest_common.sh@10 -- # set +x 00:05:29.661 02:53:24 -- json_config/json_config.sh@48 -- # local ret=0 00:05:29.661 02:53:24 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:29.661 02:53:24 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:29.661 02:53:24 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:29.661 02:53:24 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:29.661 02:53:24 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:29.661 02:53:24 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:29.661 02:53:24 -- json_config/json_config.sh@51 -- # local get_types 00:05:29.661 02:53:24 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:29.661 02:53:24 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:29.661 02:53:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:29.661 02:53:24 -- common/autotest_common.sh@10 -- # set +x 00:05:29.661 02:53:24 -- json_config/json_config.sh@58 -- # return 0 00:05:29.661 02:53:24 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:29.661 02:53:24 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:29.661 02:53:24 -- json_config/json_config.sh@339 -- 
# [[ 0 -eq 1 ]] 00:05:29.661 02:53:24 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:29.661 02:53:24 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:29.661 02:53:24 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:29.661 02:53:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:29.661 02:53:24 -- common/autotest_common.sh@10 -- # set +x 00:05:29.661 02:53:24 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:29.661 02:53:24 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:29.661 02:53:24 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:29.661 02:53:24 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:29.661 02:53:24 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:29.919 MallocForNvmf0 00:05:29.919 02:53:25 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.919 02:53:25 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:30.177 MallocForNvmf1 00:05:30.177 02:53:25 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:30.177 02:53:25 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:30.435 [2024-07-14 02:53:25.509275] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:30.435 02:53:25 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:30.435 02:53:25 -- json_config/json_config.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:30.692 02:53:25 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:30.692 02:53:25 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:30.950 02:53:26 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:30.950 02:53:26 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:31.209 02:53:26 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:31.209 02:53:26 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:31.209 [2024-07-14 02:53:26.440384] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:31.209 02:53:26 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config
00:05:31.209 02:53:26 -- common/autotest_common.sh@718 -- # xtrace_disable
00:05:31.209 02:53:26 -- common/autotest_common.sh@10 -- # set +x
00:05:31.466 02:53:26 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target
00:05:31.466 02:53:26 -- common/autotest_common.sh@718 -- # xtrace_disable
00:05:31.466 02:53:26 -- common/autotest_common.sh@10 -- # set +x
00:05:31.466 02:53:26 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]]
00:05:31.466 02:53:26 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:31.466 02:53:26 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:31.466 MallocBdevForConfigChangeCheck
00:05:31.725 02:53:26 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init
00:05:31.725 02:53:26 -- common/autotest_common.sh@718 -- # xtrace_disable
00:05:31.725 02:53:26 -- common/autotest_common.sh@10 -- # set +x
00:05:31.725 02:53:26 -- json_config/json_config.sh@422 -- # tgt_rpc save_config
00:05:31.725 02:53:26 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:31.983 02:53:27 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...'
00:05:31.983 INFO: shutting down applications...
00:05:31.983 02:53:27 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]]
00:05:31.983 02:53:27 -- json_config/json_config.sh@431 -- # json_config_clear target
00:05:31.983 02:53:27 -- json_config/json_config.sh@385 -- # [[ -n 22 ]]
00:05:31.983 02:53:27 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:05:33.883 Calling clear_iscsi_subsystem
00:05:33.883 Calling clear_nvmf_subsystem
00:05:33.883 Calling clear_nbd_subsystem
00:05:33.883 Calling clear_ublk_subsystem
00:05:33.883 Calling clear_vhost_blk_subsystem
00:05:33.883 Calling clear_vhost_scsi_subsystem
00:05:33.883 Calling clear_scheduler_subsystem
00:05:33.883 Calling clear_bdev_subsystem
00:05:33.883 Calling clear_accel_subsystem
00:05:33.883 Calling clear_vmd_subsystem
00:05:33.883 Calling clear_sock_subsystem
00:05:33.883 Calling clear_iobuf_subsystem
00:05:33.883 02:53:28 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:05:33.883 02:53:28 -- json_config/json_config.sh@396 -- # count=100
00:05:33.883 02:53:28 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']'
00:05:33.883 02:53:28 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:33.883 02:53:28 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:05:33.883 02:53:28 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:05:34.143 02:53:29 -- json_config/json_config.sh@398 -- # break
00:05:34.143 02:53:29 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']'
00:05:34.143 02:53:29 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target
00:05:34.143 02:53:29 -- json_config/json_config.sh@120 -- # local app=target
00:05:34.143 02:53:29 -- json_config/json_config.sh@123 -- # [[ -n 22 ]]
00:05:34.143 02:53:29 -- json_config/json_config.sh@124 -- # [[ -n 1878683 ]]
00:05:34.143 02:53:29 -- json_config/json_config.sh@127 -- # kill -SIGINT 1878683
00:05:34.143 02:53:29 -- json_config/json_config.sh@129 -- # (( i = 0 ))
00:05:34.143 02:53:29 -- json_config/json_config.sh@129 -- # (( i < 30 ))
00:05:34.143 02:53:29 -- json_config/json_config.sh@130 -- # kill -0 1878683
00:05:34.143 02:53:29 -- json_config/json_config.sh@134 -- # sleep 0.5
00:05:34.402 02:53:29 -- json_config/json_config.sh@129 -- # (( i++ ))
00:05:34.402 02:53:29 -- json_config/json_config.sh@129 -- # (( i < 30 ))
00:05:34.402 02:53:29 -- json_config/json_config.sh@130 -- # kill -0 1878683
00:05:34.402 02:53:29 -- json_config/json_config.sh@131 -- # app_pid[$app]=
00:05:34.402 02:53:29 -- json_config/json_config.sh@132 -- # break
00:05:34.402 02:53:29 -- json_config/json_config.sh@137 -- # [[ -n '' ]]
00:05:34.402 02:53:29 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done'
00:05:34.402 SPDK target shutdown done
00:05:34.402 02:53:29 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...'
00:05:34.402 INFO: relaunching applications...
00:05:34.402 02:53:29 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:34.402 02:53:29 -- json_config/json_config.sh@98 -- # local app=target
00:05:34.402 02:53:29 -- json_config/json_config.sh@99 -- # shift
00:05:34.402 02:53:29 -- json_config/json_config.sh@101 -- # [[ -n 22 ]]
00:05:34.402 02:53:29 -- json_config/json_config.sh@102 -- # [[ -z '' ]]
00:05:34.402 02:53:29 -- json_config/json_config.sh@104 -- # local app_extra_params=
00:05:34.402 02:53:29 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:05:34.402 02:53:29 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:05:34.661 02:53:29 -- json_config/json_config.sh@111 -- # app_pid[$app]=1879909
00:05:34.661 02:53:29 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...'
00:05:34.661 02:53:29 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:34.661 Waiting for target to run...
00:05:34.661 02:53:29 -- json_config/json_config.sh@114 -- # waitforlisten 1879909 /var/tmp/spdk_tgt.sock
00:05:34.661 02:53:29 -- common/autotest_common.sh@819 -- # '[' -z 1879909 ']'
00:05:34.661 02:53:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:34.661 02:53:29 -- common/autotest_common.sh@824 -- # local max_retries=100
00:05:34.661 02:53:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:34.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:34.661 02:53:29 -- common/autotest_common.sh@828 -- # xtrace_disable
00:05:34.661 02:53:29 -- common/autotest_common.sh@10 -- # set +x
00:05:34.661 [2024-07-14 02:53:29.703841] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:05:34.661 [2024-07-14 02:53:29.703946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1879909 ]
00:05:34.661 EAL: No free 2048 kB hugepages reported on node 1
00:05:34.920 [2024-07-14 02:53:30.068760] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:34.920 [2024-07-14 02:53:30.129145] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:34.920 [2024-07-14 02:53:30.129312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:38.203 [2024-07-14 02:53:33.146741] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:38.203 [2024-07-14 02:53:33.179074] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:38.460 02:53:33 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:05:38.460 02:53:33 -- common/autotest_common.sh@852 -- # return 0
00:05:38.460 02:53:33 -- json_config/json_config.sh@115 -- # echo ''
00:05:38.460
00:05:38.460 02:53:33 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]]
00:05:38.460 02:53:33 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...'
00:05:38.460 INFO: Checking if target configuration is the same...
00:05:38.460 02:53:33 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:38.460 02:53:33 -- json_config/json_config.sh@441 -- # tgt_rpc save_config
00:05:38.460 02:53:33 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:38.460 + '[' 2 -ne 2 ']'
00:05:38.460 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:38.460 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:38.460 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:38.460 +++ basename /dev/fd/62
00:05:38.460 ++ mktemp /tmp/62.XXX
00:05:38.460 + tmp_file_1=/tmp/62.lC2
00:05:38.460 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:38.460 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:38.460 + tmp_file_2=/tmp/spdk_tgt_config.json.JZK
00:05:38.460 + ret=0
00:05:38.460 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:38.718 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:38.718 + diff -u /tmp/62.lC2 /tmp/spdk_tgt_config.json.JZK
00:05:38.718 + echo 'INFO: JSON config files are the same'
00:05:38.718 INFO: JSON config files are the same
00:05:38.718 + rm /tmp/62.lC2 /tmp/spdk_tgt_config.json.JZK
00:05:38.718 + exit 0
00:05:38.718 02:53:33 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]]
00:05:38.718 02:53:33 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:05:38.718 INFO: changing configuration and checking if this can be detected...
00:05:38.718 02:53:33 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:38.718 02:53:33 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:38.976 02:53:34 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:38.976 02:53:34 -- json_config/json_config.sh@450 -- # tgt_rpc save_config
00:05:38.976 02:53:34 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:38.976 + '[' 2 -ne 2 ']'
00:05:38.976 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:38.976 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:38.976 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:38.976 +++ basename /dev/fd/62
00:05:38.976 ++ mktemp /tmp/62.XXX
00:05:38.976 + tmp_file_1=/tmp/62.B3I
00:05:38.976 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:38.976 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:38.976 + tmp_file_2=/tmp/spdk_tgt_config.json.dB9
00:05:38.976 + ret=0
00:05:38.976 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:39.543 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:39.543 + diff -u /tmp/62.B3I /tmp/spdk_tgt_config.json.dB9
00:05:39.543 + ret=1
00:05:39.543 + echo '=== Start of file: /tmp/62.B3I ==='
00:05:39.543 + cat /tmp/62.B3I
00:05:39.543 + echo '=== End of file: /tmp/62.B3I ==='
00:05:39.543 + echo ''
00:05:39.543 + echo '=== Start of file: /tmp/spdk_tgt_config.json.dB9 ==='
00:05:39.543 + cat /tmp/spdk_tgt_config.json.dB9
00:05:39.543 + echo '=== End of file: /tmp/spdk_tgt_config.json.dB9 ==='
00:05:39.543 + echo ''
00:05:39.543 + rm /tmp/62.B3I /tmp/spdk_tgt_config.json.dB9
00:05:39.543 + exit 1
00:05:39.543 02:53:34 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.'
00:05:39.543 INFO: configuration change detected.
00:05:39.543 02:53:34 -- json_config/json_config.sh@457 -- # json_config_test_fini
00:05:39.543 02:53:34 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini
00:05:39.543 02:53:34 -- common/autotest_common.sh@712 -- # xtrace_disable
00:05:39.543 02:53:34 -- common/autotest_common.sh@10 -- # set +x
00:05:39.543 02:53:34 -- json_config/json_config.sh@360 -- # local ret=0
00:05:39.543 02:53:34 -- json_config/json_config.sh@362 -- # [[ -n '' ]]
00:05:39.543 02:53:34 -- json_config/json_config.sh@370 -- # [[ -n 1879909 ]]
00:05:39.543 02:53:34 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config
00:05:39.543 02:53:34 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config
00:05:39.543 02:53:34 -- common/autotest_common.sh@712 -- # xtrace_disable
00:05:39.543 02:53:34 -- common/autotest_common.sh@10 -- # set +x
00:05:39.543 02:53:34 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]]
00:05:39.543 02:53:34 -- json_config/json_config.sh@246 -- # uname -s
00:05:39.543 02:53:34 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]]
00:05:39.543 02:53:34 -- json_config/json_config.sh@247 -- # rm -f /sample_aio
00:05:39.543 02:53:34 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]]
00:05:39.543 02:53:34 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config
00:05:39.543 02:53:34 -- common/autotest_common.sh@718 -- # xtrace_disable
00:05:39.543 02:53:34 -- common/autotest_common.sh@10 -- # set +x
00:05:39.543 02:53:34 -- json_config/json_config.sh@376 -- # killprocess 1879909
00:05:39.543 02:53:34 -- common/autotest_common.sh@926 -- # '[' -z 1879909 ']'
00:05:39.543 02:53:34 -- common/autotest_common.sh@930 -- # kill -0 1879909
00:05:39.543 02:53:34 -- common/autotest_common.sh@931 -- # uname
00:05:39.543 02:53:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:05:39.543 02:53:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1879909
00:05:39.543 02:53:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:05:39.543 02:53:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:05:39.543 02:53:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1879909'
00:05:39.543 killing process with pid 1879909
00:05:39.543 02:53:34 -- common/autotest_common.sh@945 -- # kill 1879909
00:05:39.543 02:53:34 -- common/autotest_common.sh@950 -- # wait 1879909
00:05:41.440 02:53:36 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:41.440 02:53:36 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini
00:05:41.440 02:53:36 -- common/autotest_common.sh@718 -- # xtrace_disable
00:05:41.440 02:53:36 -- common/autotest_common.sh@10 -- # set +x
00:05:41.440 02:53:36 -- json_config/json_config.sh@381 -- # return 0
00:05:41.440 02:53:36 -- json_config/json_config.sh@459 -- # echo 'INFO: Success'
00:05:41.440 INFO: Success
00:05:41.440
00:05:41.440 real 0m15.895s
00:05:41.440 user 0m18.179s
00:05:41.440 sys 0m1.913s
00:05:41.440 02:53:36 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:41.440 02:53:36 -- common/autotest_common.sh@10 -- # set +x
00:05:41.440 ************************************
00:05:41.440 END TEST json_config
00:05:41.440 ************************************
00:05:41.440 02:53:36 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:41.440 02:53:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:41.440 02:53:36 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:41.440 02:53:36 -- common/autotest_common.sh@10 -- # set +x
00:05:41.440 ************************************
00:05:41.440 START TEST json_config_extra_key
00:05:41.440 ************************************
00:05:41.440 02:53:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:41.440 02:53:36 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:41.440 02:53:36 -- nvmf/common.sh@7 -- # uname -s
00:05:41.440 02:53:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:41.440 02:53:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:41.440 02:53:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:41.440 02:53:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:41.440 02:53:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:41.440 02:53:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:41.440 02:53:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:41.440 02:53:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:41.440 02:53:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:41.440 02:53:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:41.441 02:53:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:05:41.441 02:53:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:05:41.441 02:53:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:41.441 02:53:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:41.441 02:53:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:41.441 02:53:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:41.441 02:53:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:41.441 02:53:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:41.441 02:53:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:41.441 02:53:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:41.441 02:53:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:41.441 02:53:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:41.441 02:53:36 -- paths/export.sh@5 -- # export PATH
00:05:41.441 02:53:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:41.441 02:53:36 -- nvmf/common.sh@46 -- # : 0
00:05:41.441 02:53:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:05:41.441 02:53:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:05:41.441 02:53:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:05:41.441 02:53:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:41.441 02:53:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:41.441 02:53:36 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:05:41.441 02:53:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:05:41.441 02:53:36 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='')
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024')
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...'
00:05:41.441 INFO: launching applications...
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@24 -- # local app=target
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@25 -- # shift
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]]
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]]
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1880864
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...'
00:05:41.441 Waiting for target to run...
00:05:41.441 02:53:36 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1880864 /var/tmp/spdk_tgt.sock
00:05:41.441 02:53:36 -- common/autotest_common.sh@819 -- # '[' -z 1880864 ']'
00:05:41.441 02:53:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:41.441 02:53:36 -- common/autotest_common.sh@824 -- # local max_retries=100
00:05:41.441 02:53:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:41.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:41.441 02:53:36 -- common/autotest_common.sh@828 -- # xtrace_disable
00:05:41.441 02:53:36 -- common/autotest_common.sh@10 -- # set +x
00:05:41.441 [2024-07-14 02:53:36.386398] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:05:41.441 [2024-07-14 02:53:36.386504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1880864 ]
00:05:41.441 EAL: No free 2048 kB hugepages reported on node 1
00:05:41.699 [2024-07-14 02:53:36.885716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:41.956 [2024-07-14 02:53:36.963562] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:41.957 [2024-07-14 02:53:36.963729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:42.221 02:53:37 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:05:42.221 02:53:37 -- common/autotest_common.sh@852 -- # return 0
00:05:42.221 02:53:37 -- json_config/json_config_extra_key.sh@35 -- # echo ''
00:05:42.221
00:05:42.221 02:53:37 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...'
00:05:42.221 INFO: shutting down applications...
00:05:42.221 02:53:37 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target
00:05:42.221 02:53:37 -- json_config/json_config_extra_key.sh@40 -- # local app=target
00:05:42.221 02:53:37 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]]
00:05:42.221 02:53:37 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1880864 ]]
00:05:42.221 02:53:37 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1880864
00:05:42.221 02:53:37 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 ))
00:05:42.221 02:53:37 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:05:42.221 02:53:37 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1880864
00:05:42.221 02:53:37 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5
00:05:42.831 02:53:37 -- json_config/json_config_extra_key.sh@49 -- # (( i++ ))
00:05:42.831 02:53:37 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 ))
00:05:42.831 02:53:37 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1880864
00:05:42.831 02:53:37 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]=
00:05:42.831 02:53:37 -- json_config/json_config_extra_key.sh@52 -- # break
00:05:42.831 02:53:37 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]]
00:05:42.831 02:53:37 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done'
00:05:42.831 SPDK target shutdown done
00:05:42.831 02:53:37 -- json_config/json_config_extra_key.sh@82 -- # echo Success
00:05:42.831 Success
00:05:42.831
00:05:42.831 real 0m1.522s
00:05:42.831 user 0m1.329s
00:05:42.831 sys 0m0.588s
00:05:42.831 02:53:37 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:42.831 02:53:37 -- common/autotest_common.sh@10 -- # set +x
00:05:42.831 ************************************
00:05:42.831 END TEST json_config_extra_key
00:05:42.831 ************************************
00:05:42.831 02:53:37 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:42.831 02:53:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:42.831 02:53:37 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:42.831 02:53:37 -- common/autotest_common.sh@10 -- # set +x
00:05:42.831 ************************************
00:05:42.831 START TEST alias_rpc
00:05:42.831 ************************************
00:05:42.831 02:53:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:42.831 * Looking for test storage...
00:05:42.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:05:42.831 02:53:37 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:05:42.831 02:53:37 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1881050
00:05:42.831 02:53:37 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:42.831 02:53:37 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1881050
00:05:42.831 02:53:37 -- common/autotest_common.sh@819 -- # '[' -z 1881050 ']'
00:05:42.831 02:53:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:42.831 02:53:37 -- common/autotest_common.sh@824 -- # local max_retries=100
00:05:42.831 02:53:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:42.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:42.831 02:53:37 -- common/autotest_common.sh@828 -- # xtrace_disable
00:05:42.831 02:53:37 -- common/autotest_common.sh@10 -- # set +x
00:05:42.831 [2024-07-14 02:53:37.931292] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:05:42.831 [2024-07-14 02:53:37.931390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1881050 ]
00:05:42.831 EAL: No free 2048 kB hugepages reported on node 1
00:05:42.831 [2024-07-14 02:53:37.990064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:42.831 [2024-07-14 02:53:38.074648] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:42.831 [2024-07-14 02:53:38.074813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:43.764 02:53:38 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:05:43.764 02:53:38 -- common/autotest_common.sh@852 -- # return 0
00:05:43.764 02:53:38 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i
00:05:44.022 02:53:39 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1881050
00:05:44.022 02:53:39 -- common/autotest_common.sh@926 -- # '[' -z 1881050 ']'
00:05:44.022 02:53:39 -- common/autotest_common.sh@930 -- # kill -0 1881050
00:05:44.022 02:53:39 -- common/autotest_common.sh@931 -- # uname
00:05:44.022 02:53:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:05:44.022 02:53:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1881050
00:05:44.022 02:53:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:05:44.022 02:53:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:05:44.022 02:53:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1881050'
00:05:44.022 killing process with pid 1881050
00:05:44.022 02:53:39 -- common/autotest_common.sh@945 -- # kill 1881050
00:05:44.022 02:53:39 -- common/autotest_common.sh@950 -- # wait 1881050
00:05:44.280
00:05:44.280 real 0m1.690s
00:05:44.280 user 0m1.940s
00:05:44.280 sys 0m0.444s
00:05:44.280 02:53:39 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:44.280 02:53:39 -- common/autotest_common.sh@10 -- # set +x
00:05:44.280 ************************************
00:05:44.280 END TEST alias_rpc
00:05:44.280 ************************************
00:05:44.538 02:53:39 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]]
00:05:44.538 02:53:39 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:05:44.538 02:53:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:44.538 02:53:39 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:44.538 02:53:39 -- common/autotest_common.sh@10 -- # set +x
00:05:44.538 ************************************
00:05:44.538 START TEST spdkcli_tcp
00:05:44.538 ************************************
00:05:44.538 02:53:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh
00:05:44.538 * Looking for test storage...
00:05:44.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:05:44.538 02:53:39 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:05:44.538 02:53:39 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:05:44.538 02:53:39 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:05:44.538 02:53:39 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:05:44.538 02:53:39 -- spdkcli/tcp.sh@19 -- # PORT=9998
00:05:44.538 02:53:39 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:05:44.538 02:53:39 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:05:44.538 02:53:39 -- common/autotest_common.sh@712 -- # xtrace_disable
00:05:44.538 02:53:39 -- common/autotest_common.sh@10 -- # set +x
00:05:44.538 02:53:39 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1881373
00:05:44.538 02:53:39 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:05:44.538 02:53:39 -- spdkcli/tcp.sh@27 -- # waitforlisten 1881373
00:05:44.538 02:53:39 -- common/autotest_common.sh@819 -- # '[' -z 1881373 ']'
00:05:44.538 02:53:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:44.538 02:53:39 -- common/autotest_common.sh@824 -- # local max_retries=100
00:05:44.538 02:53:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:44.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:44.538 02:53:39 -- common/autotest_common.sh@828 -- # xtrace_disable
00:05:44.538 02:53:39 -- common/autotest_common.sh@10 -- # set +x
00:05:44.538 [2024-07-14 02:53:39.652830] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:05:44.538 [2024-07-14 02:53:39.652928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1881373 ]
00:05:44.538 EAL: No free 2048 kB hugepages reported on node 1
00:05:44.538 [2024-07-14 02:53:39.709653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:44.796 [2024-07-14 02:53:39.792913] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:05:44.796 [2024-07-14 02:53:39.793119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:44.796 [2024-07-14 02:53:39.793124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:45.361 02:53:40 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:05:45.361 02:53:40 -- common/autotest_common.sh@852 -- # return 0
00:05:45.361 02:53:40 -- spdkcli/tcp.sh@31 -- # socat_pid=1881516
00:05:45.361 02:53:40 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:05:45.361 02:53:40 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:05:45.619 [
00:05:45.619 "bdev_malloc_delete",
00:05:45.619 "bdev_malloc_create",
00:05:45.619 "bdev_null_resize",
00:05:45.619 "bdev_null_delete",
00:05:45.619 "bdev_null_create",
00:05:45.619 "bdev_nvme_cuse_unregister",
00:05:45.619 "bdev_nvme_cuse_register",
00:05:45.619 "bdev_opal_new_user",
00:05:45.619 "bdev_opal_set_lock_state",
00:05:45.619 "bdev_opal_delete",
00:05:45.619 "bdev_opal_get_info",
00:05:45.619 "bdev_opal_create",
00:05:45.619 "bdev_nvme_opal_revert",
00:05:45.620 "bdev_nvme_opal_init",
00:05:45.620 "bdev_nvme_send_cmd",
00:05:45.620 "bdev_nvme_get_path_iostat",
00:05:45.620 "bdev_nvme_get_mdns_discovery_info",
00:05:45.620 "bdev_nvme_stop_mdns_discovery",
00:05:45.620 "bdev_nvme_start_mdns_discovery",
00:05:45.620 "bdev_nvme_set_multipath_policy",
00:05:45.620 "bdev_nvme_set_preferred_path",
00:05:45.620 "bdev_nvme_get_io_paths",
00:05:45.620 "bdev_nvme_remove_error_injection",
00:05:45.620 "bdev_nvme_add_error_injection",
00:05:45.620 "bdev_nvme_get_discovery_info",
00:05:45.620 "bdev_nvme_stop_discovery",
00:05:45.620 "bdev_nvme_start_discovery",
00:05:45.620 "bdev_nvme_get_controller_health_info",
00:05:45.620 "bdev_nvme_disable_controller",
00:05:45.620 "bdev_nvme_enable_controller",
00:05:45.620 "bdev_nvme_reset_controller",
00:05:45.620 "bdev_nvme_get_transport_statistics",
00:05:45.620 "bdev_nvme_apply_firmware",
00:05:45.620 "bdev_nvme_detach_controller",
00:05:45.620 "bdev_nvme_get_controllers",
00:05:45.620 "bdev_nvme_attach_controller",
00:05:45.620 "bdev_nvme_set_hotplug",
00:05:45.620 "bdev_nvme_set_options",
00:05:45.620 "bdev_passthru_delete",
00:05:45.620 "bdev_passthru_create",
00:05:45.620 "bdev_lvol_grow_lvstore",
00:05:45.620 "bdev_lvol_get_lvols",
00:05:45.620 "bdev_lvol_get_lvstores",
00:05:45.620 "bdev_lvol_delete",
00:05:45.620 "bdev_lvol_set_read_only",
00:05:45.620 "bdev_lvol_resize",
00:05:45.620 "bdev_lvol_decouple_parent",
00:05:45.620 "bdev_lvol_inflate",
00:05:45.620 "bdev_lvol_rename",
00:05:45.620 "bdev_lvol_clone_bdev",
00:05:45.620 "bdev_lvol_clone",
00:05:45.620 "bdev_lvol_snapshot",
00:05:45.620 "bdev_lvol_create",
00:05:45.620 "bdev_lvol_delete_lvstore",
00:05:45.620 "bdev_lvol_rename_lvstore",
00:05:45.620 "bdev_lvol_create_lvstore",
00:05:45.620 "bdev_raid_set_options",
00:05:45.620 "bdev_raid_remove_base_bdev",
00:05:45.620 "bdev_raid_add_base_bdev",
00:05:45.620 "bdev_raid_delete",
00:05:45.620 "bdev_raid_create",
00:05:45.620 "bdev_raid_get_bdevs",
00:05:45.620 "bdev_error_inject_error",
00:05:45.620 "bdev_error_delete",
00:05:45.620 "bdev_error_create",
00:05:45.620 "bdev_split_delete",
00:05:45.620 "bdev_split_create",
00:05:45.620 "bdev_delay_delete",
00:05:45.620 "bdev_delay_create",
00:05:45.620 "bdev_delay_update_latency",
00:05:45.620 "bdev_zone_block_delete",
00:05:45.620 "bdev_zone_block_create",
00:05:45.620 "blobfs_create",
00:05:45.620 "blobfs_detect",
00:05:45.620 "blobfs_set_cache_size",
00:05:45.620 "bdev_aio_delete",
00:05:45.620 "bdev_aio_rescan",
00:05:45.620 "bdev_aio_create",
00:05:45.620 "bdev_ftl_set_property",
00:05:45.620 "bdev_ftl_get_properties",
00:05:45.620 "bdev_ftl_get_stats",
00:05:45.620 "bdev_ftl_unmap",
00:05:45.620 "bdev_ftl_unload",
00:05:45.620 "bdev_ftl_delete",
00:05:45.620 "bdev_ftl_load",
00:05:45.620 "bdev_ftl_create",
00:05:45.620 "bdev_virtio_attach_controller",
00:05:45.620 "bdev_virtio_scsi_get_devices",
00:05:45.620 "bdev_virtio_detach_controller",
00:05:45.620 "bdev_virtio_blk_set_hotplug",
00:05:45.620 "bdev_iscsi_delete",
00:05:45.620 "bdev_iscsi_create",
00:05:45.620 "bdev_iscsi_set_options",
00:05:45.620 "accel_error_inject_error",
00:05:45.620 "ioat_scan_accel_module",
00:05:45.620 "dsa_scan_accel_module",
00:05:45.620 "iaa_scan_accel_module",
00:05:45.620 "vfu_virtio_create_scsi_endpoint",
00:05:45.620 "vfu_virtio_scsi_remove_target",
00:05:45.620 "vfu_virtio_scsi_add_target",
00:05:45.620 "vfu_virtio_create_blk_endpoint",
00:05:45.620 "vfu_virtio_delete_endpoint",
00:05:45.620 "iscsi_set_options",
00:05:45.620 "iscsi_get_auth_groups",
00:05:45.620 "iscsi_auth_group_remove_secret",
00:05:45.620 "iscsi_auth_group_add_secret",
00:05:45.620 "iscsi_delete_auth_group",
00:05:45.620 "iscsi_create_auth_group",
00:05:45.620 "iscsi_set_discovery_auth",
00:05:45.620 "iscsi_get_options",
00:05:45.620 "iscsi_target_node_request_logout",
00:05:45.620 "iscsi_target_node_set_redirect",
00:05:45.620 "iscsi_target_node_set_auth",
00:05:45.620
"iscsi_target_node_add_lun", 00:05:45.620 "iscsi_get_connections", 00:05:45.620 "iscsi_portal_group_set_auth", 00:05:45.620 "iscsi_start_portal_group", 00:05:45.620 "iscsi_delete_portal_group", 00:05:45.620 "iscsi_create_portal_group", 00:05:45.620 "iscsi_get_portal_groups", 00:05:45.620 "iscsi_delete_target_node", 00:05:45.620 "iscsi_target_node_remove_pg_ig_maps", 00:05:45.620 "iscsi_target_node_add_pg_ig_maps", 00:05:45.620 "iscsi_create_target_node", 00:05:45.620 "iscsi_get_target_nodes", 00:05:45.620 "iscsi_delete_initiator_group", 00:05:45.620 "iscsi_initiator_group_remove_initiators", 00:05:45.620 "iscsi_initiator_group_add_initiators", 00:05:45.620 "iscsi_create_initiator_group", 00:05:45.620 "iscsi_get_initiator_groups", 00:05:45.620 "nvmf_set_crdt", 00:05:45.620 "nvmf_set_config", 00:05:45.620 "nvmf_set_max_subsystems", 00:05:45.620 "nvmf_subsystem_get_listeners", 00:05:45.620 "nvmf_subsystem_get_qpairs", 00:05:45.620 "nvmf_subsystem_get_controllers", 00:05:45.620 "nvmf_get_stats", 00:05:45.620 "nvmf_get_transports", 00:05:45.620 "nvmf_create_transport", 00:05:45.620 "nvmf_get_targets", 00:05:45.620 "nvmf_delete_target", 00:05:45.620 "nvmf_create_target", 00:05:45.620 "nvmf_subsystem_allow_any_host", 00:05:45.620 "nvmf_subsystem_remove_host", 00:05:45.620 "nvmf_subsystem_add_host", 00:05:45.620 "nvmf_subsystem_remove_ns", 00:05:45.620 "nvmf_subsystem_add_ns", 00:05:45.620 "nvmf_subsystem_listener_set_ana_state", 00:05:45.620 "nvmf_discovery_get_referrals", 00:05:45.620 "nvmf_discovery_remove_referral", 00:05:45.620 "nvmf_discovery_add_referral", 00:05:45.620 "nvmf_subsystem_remove_listener", 00:05:45.620 "nvmf_subsystem_add_listener", 00:05:45.620 "nvmf_delete_subsystem", 00:05:45.620 "nvmf_create_subsystem", 00:05:45.620 "nvmf_get_subsystems", 00:05:45.620 "env_dpdk_get_mem_stats", 00:05:45.620 "nbd_get_disks", 00:05:45.620 "nbd_stop_disk", 00:05:45.620 "nbd_start_disk", 00:05:45.620 "ublk_recover_disk", 00:05:45.620 "ublk_get_disks", 00:05:45.620 
"ublk_stop_disk", 00:05:45.620 "ublk_start_disk", 00:05:45.620 "ublk_destroy_target", 00:05:45.620 "ublk_create_target", 00:05:45.620 "virtio_blk_create_transport", 00:05:45.620 "virtio_blk_get_transports", 00:05:45.620 "vhost_controller_set_coalescing", 00:05:45.620 "vhost_get_controllers", 00:05:45.620 "vhost_delete_controller", 00:05:45.620 "vhost_create_blk_controller", 00:05:45.620 "vhost_scsi_controller_remove_target", 00:05:45.620 "vhost_scsi_controller_add_target", 00:05:45.620 "vhost_start_scsi_controller", 00:05:45.620 "vhost_create_scsi_controller", 00:05:45.620 "thread_set_cpumask", 00:05:45.620 "framework_get_scheduler", 00:05:45.620 "framework_set_scheduler", 00:05:45.620 "framework_get_reactors", 00:05:45.620 "thread_get_io_channels", 00:05:45.620 "thread_get_pollers", 00:05:45.620 "thread_get_stats", 00:05:45.620 "framework_monitor_context_switch", 00:05:45.620 "spdk_kill_instance", 00:05:45.620 "log_enable_timestamps", 00:05:45.620 "log_get_flags", 00:05:45.620 "log_clear_flag", 00:05:45.620 "log_set_flag", 00:05:45.620 "log_get_level", 00:05:45.620 "log_set_level", 00:05:45.620 "log_get_print_level", 00:05:45.620 "log_set_print_level", 00:05:45.620 "framework_enable_cpumask_locks", 00:05:45.620 "framework_disable_cpumask_locks", 00:05:45.620 "framework_wait_init", 00:05:45.620 "framework_start_init", 00:05:45.620 "scsi_get_devices", 00:05:45.620 "bdev_get_histogram", 00:05:45.620 "bdev_enable_histogram", 00:05:45.620 "bdev_set_qos_limit", 00:05:45.620 "bdev_set_qd_sampling_period", 00:05:45.620 "bdev_get_bdevs", 00:05:45.620 "bdev_reset_iostat", 00:05:45.620 "bdev_get_iostat", 00:05:45.620 "bdev_examine", 00:05:45.620 "bdev_wait_for_examine", 00:05:45.620 "bdev_set_options", 00:05:45.620 "notify_get_notifications", 00:05:45.620 "notify_get_types", 00:05:45.620 "accel_get_stats", 00:05:45.620 "accel_set_options", 00:05:45.620 "accel_set_driver", 00:05:45.620 "accel_crypto_key_destroy", 00:05:45.620 "accel_crypto_keys_get", 00:05:45.620 
"accel_crypto_key_create", 00:05:45.620 "accel_assign_opc", 00:05:45.620 "accel_get_module_info", 00:05:45.620 "accel_get_opc_assignments", 00:05:45.620 "vmd_rescan", 00:05:45.620 "vmd_remove_device", 00:05:45.620 "vmd_enable", 00:05:45.620 "sock_set_default_impl", 00:05:45.620 "sock_impl_set_options", 00:05:45.620 "sock_impl_get_options", 00:05:45.620 "iobuf_get_stats", 00:05:45.620 "iobuf_set_options", 00:05:45.620 "framework_get_pci_devices", 00:05:45.620 "framework_get_config", 00:05:45.620 "framework_get_subsystems", 00:05:45.620 "vfu_tgt_set_base_path", 00:05:45.620 "trace_get_info", 00:05:45.620 "trace_get_tpoint_group_mask", 00:05:45.620 "trace_disable_tpoint_group", 00:05:45.621 "trace_enable_tpoint_group", 00:05:45.621 "trace_clear_tpoint_mask", 00:05:45.621 "trace_set_tpoint_mask", 00:05:45.621 "spdk_get_version", 00:05:45.621 "rpc_get_methods" 00:05:45.621 ] 00:05:45.621 02:53:40 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:45.621 02:53:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:45.621 02:53:40 -- common/autotest_common.sh@10 -- # set +x 00:05:45.621 02:53:40 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:45.621 02:53:40 -- spdkcli/tcp.sh@38 -- # killprocess 1881373 00:05:45.621 02:53:40 -- common/autotest_common.sh@926 -- # '[' -z 1881373 ']' 00:05:45.621 02:53:40 -- common/autotest_common.sh@930 -- # kill -0 1881373 00:05:45.621 02:53:40 -- common/autotest_common.sh@931 -- # uname 00:05:45.621 02:53:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:45.621 02:53:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1881373 00:05:45.621 02:53:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:45.621 02:53:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:45.621 02:53:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1881373' 00:05:45.621 killing process with pid 1881373 00:05:45.621 02:53:40 -- 
common/autotest_common.sh@945 -- # kill 1881373 00:05:45.621 02:53:40 -- common/autotest_common.sh@950 -- # wait 1881373 00:05:46.188 00:05:46.188 real 0m1.721s 00:05:46.188 user 0m3.363s 00:05:46.188 sys 0m0.471s 00:05:46.188 02:53:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.188 02:53:41 -- common/autotest_common.sh@10 -- # set +x 00:05:46.188 ************************************ 00:05:46.188 END TEST spdkcli_tcp 00:05:46.188 ************************************ 00:05:46.188 02:53:41 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.188 02:53:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.188 02:53:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.188 02:53:41 -- common/autotest_common.sh@10 -- # set +x 00:05:46.188 ************************************ 00:05:46.188 START TEST dpdk_mem_utility 00:05:46.188 ************************************ 00:05:46.188 02:53:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:46.188 * Looking for test storage... 
00:05:46.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:46.188 02:53:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:46.188 02:53:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1881584 00:05:46.188 02:53:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.188 02:53:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1881584 00:05:46.188 02:53:41 -- common/autotest_common.sh@819 -- # '[' -z 1881584 ']' 00:05:46.188 02:53:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.188 02:53:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:46.188 02:53:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.188 02:53:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:46.188 02:53:41 -- common/autotest_common.sh@10 -- # set +x 00:05:46.189 [2024-07-14 02:53:41.392703] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:05:46.189 [2024-07-14 02:53:41.392782] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1881584 ] 00:05:46.189 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.447 [2024-07-14 02:53:41.452982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.447 [2024-07-14 02:53:41.548724] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.447 [2024-07-14 02:53:41.548908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.383 02:53:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:47.383 02:53:42 -- common/autotest_common.sh@852 -- # return 0 00:05:47.383 02:53:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:47.383 02:53:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:47.383 02:53:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.383 02:53:42 -- common/autotest_common.sh@10 -- # set +x 00:05:47.383 { 00:05:47.383 "filename": "/tmp/spdk_mem_dump.txt" 00:05:47.383 } 00:05:47.383 02:53:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.383 02:53:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:47.383 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:47.383 1 heaps totaling size 814.000000 MiB 00:05:47.383 size: 814.000000 MiB heap id: 0 00:05:47.383 end heaps---------- 00:05:47.383 8 mempools totaling size 598.116089 MiB 00:05:47.383 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:47.383 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:47.383 size: 84.521057 MiB name: bdev_io_1881584 00:05:47.383 size: 51.011292 MiB name: evtpool_1881584 00:05:47.383 size: 
50.003479 MiB name: msgpool_1881584 00:05:47.383 size: 21.763794 MiB name: PDU_Pool 00:05:47.383 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:47.383 size: 0.026123 MiB name: Session_Pool 00:05:47.383 end mempools------- 00:05:47.383 6 memzones totaling size 4.142822 MiB 00:05:47.383 size: 1.000366 MiB name: RG_ring_0_1881584 00:05:47.383 size: 1.000366 MiB name: RG_ring_1_1881584 00:05:47.383 size: 1.000366 MiB name: RG_ring_4_1881584 00:05:47.383 size: 1.000366 MiB name: RG_ring_5_1881584 00:05:47.383 size: 0.125366 MiB name: RG_ring_2_1881584 00:05:47.383 size: 0.015991 MiB name: RG_ring_3_1881584 00:05:47.383 end memzones------- 00:05:47.383 02:53:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:47.383 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:47.383 list of free elements. size: 12.519348 MiB 00:05:47.383 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:47.383 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:47.383 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:47.383 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:47.383 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:47.383 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:47.383 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:47.383 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:47.383 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:47.383 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:47.383 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:47.383 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:47.383 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:47.383 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:47.383 element at 
address: 0x200003a00000 with size: 0.355530 MiB 00:05:47.383 list of standard malloc elements. size: 199.218079 MiB 00:05:47.383 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:47.383 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:47.383 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:47.383 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:47.383 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:47.383 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:47.383 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:47.383 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:47.383 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:47.383 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:47.383 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:47.384 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:47.384 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:47.384 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:47.384 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:47.384 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:47.384 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:47.384 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:47.384 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:47.384 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:47.384 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:47.384 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:47.384 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:47.384 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:47.384 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:47.384 element at address: 0x200003eff0c0 with size: 0.000183 MiB 
00:05:47.384 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:47.384 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:47.384 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:47.384 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:47.384 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:47.384 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:47.384 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:47.384 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:47.384 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:47.384 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:47.384 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:47.384 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:47.384 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:47.384 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:47.384 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:47.384 list of memzone associated elements. 
size: 602.262573 MiB 00:05:47.384 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:47.384 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:47.384 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:47.384 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:47.384 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:47.384 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1881584_0 00:05:47.384 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:47.384 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1881584_0 00:05:47.384 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:47.384 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1881584_0 00:05:47.384 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:47.384 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:47.384 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:47.384 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:47.384 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:47.384 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1881584 00:05:47.384 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:47.384 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1881584 00:05:47.384 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:47.384 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1881584 00:05:47.384 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:47.384 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:47.384 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:47.384 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:47.384 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:47.384 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:47.384 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:47.384 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:47.384 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:47.384 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1881584 00:05:47.384 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:47.384 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1881584 00:05:47.384 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:47.384 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1881584 00:05:47.384 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:47.384 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1881584 00:05:47.384 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:47.384 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1881584 00:05:47.384 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:47.384 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:47.384 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:47.384 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:47.384 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:47.384 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:47.384 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:47.384 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1881584 00:05:47.384 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:47.384 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:47.384 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:47.384 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:47.384 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:05:47.384 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1881584 00:05:47.384 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:47.384 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:47.384 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:47.384 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1881584 00:05:47.384 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:47.384 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1881584 00:05:47.384 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:47.384 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:47.384 02:53:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:47.384 02:53:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1881584 00:05:47.384 02:53:42 -- common/autotest_common.sh@926 -- # '[' -z 1881584 ']' 00:05:47.384 02:53:42 -- common/autotest_common.sh@930 -- # kill -0 1881584 00:05:47.384 02:53:42 -- common/autotest_common.sh@931 -- # uname 00:05:47.384 02:53:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:47.384 02:53:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1881584 00:05:47.384 02:53:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:47.384 02:53:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:47.384 02:53:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1881584' 00:05:47.384 killing process with pid 1881584 00:05:47.384 02:53:42 -- common/autotest_common.sh@945 -- # kill 1881584 00:05:47.384 02:53:42 -- common/autotest_common.sh@950 -- # wait 1881584 00:05:47.643 00:05:47.643 real 0m1.567s 00:05:47.643 user 0m1.704s 00:05:47.643 sys 0m0.458s 00:05:47.643 02:53:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.643 02:53:42 -- common/autotest_common.sh@10 -- # set +x 00:05:47.643 
************************************ 00:05:47.643 END TEST dpdk_mem_utility 00:05:47.643 ************************************ 00:05:47.643 02:53:42 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:47.643 02:53:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.643 02:53:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.643 02:53:42 -- common/autotest_common.sh@10 -- # set +x 00:05:47.643 ************************************ 00:05:47.643 START TEST event 00:05:47.643 ************************************ 00:05:47.643 02:53:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:47.902 * Looking for test storage... 00:05:47.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:47.902 02:53:42 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:47.902 02:53:42 -- bdev/nbd_common.sh@6 -- # set -e 00:05:47.902 02:53:42 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:47.902 02:53:42 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:47.902 02:53:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.902 02:53:42 -- common/autotest_common.sh@10 -- # set +x 00:05:47.902 ************************************ 00:05:47.902 START TEST event_perf 00:05:47.902 ************************************ 00:05:47.902 02:53:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:47.902 Running I/O for 1 seconds...[2024-07-14 02:53:42.957702] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:05:47.902 [2024-07-14 02:53:42.957781] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1881903 ] 00:05:47.902 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.902 [2024-07-14 02:53:43.020054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:47.902 [2024-07-14 02:53:43.110679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.902 [2024-07-14 02:53:43.110738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:47.902 [2024-07-14 02:53:43.110835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.902 [2024-07-14 02:53:43.110838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.276 Running I/O for 1 seconds... 00:05:49.276 lcore 0: 238596 00:05:49.276 lcore 1: 238595 00:05:49.276 lcore 2: 238596 00:05:49.276 lcore 3: 238595 00:05:49.276 done. 
00:05:49.276 00:05:49.276 real 0m1.250s 00:05:49.276 user 0m4.164s 00:05:49.276 sys 0m0.082s 00:05:49.276 02:53:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.276 02:53:44 -- common/autotest_common.sh@10 -- # set +x 00:05:49.276 ************************************ 00:05:49.276 END TEST event_perf 00:05:49.276 ************************************ 00:05:49.276 02:53:44 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:49.276 02:53:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:49.276 02:53:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.276 02:53:44 -- common/autotest_common.sh@10 -- # set +x 00:05:49.276 ************************************ 00:05:49.276 START TEST event_reactor 00:05:49.276 ************************************ 00:05:49.276 02:53:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:49.276 [2024-07-14 02:53:44.232230] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:05:49.277 [2024-07-14 02:53:44.232314] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882062 ] 00:05:49.277 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.277 [2024-07-14 02:53:44.293279] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.277 [2024-07-14 02:53:44.383565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.211 test_start 00:05:50.211 oneshot 00:05:50.211 tick 100 00:05:50.211 tick 100 00:05:50.211 tick 250 00:05:50.211 tick 100 00:05:50.211 tick 100 00:05:50.211 tick 100 00:05:50.211 tick 250 00:05:50.211 tick 500 00:05:50.211 tick 100 00:05:50.211 tick 100 00:05:50.211 tick 250 00:05:50.211 tick 100 00:05:50.211 tick 100 00:05:50.211 test_end 00:05:50.470 00:05:50.470 real 0m1.247s 00:05:50.470 user 0m1.163s 00:05:50.470 sys 0m0.079s 00:05:50.470 02:53:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.470 02:53:45 -- common/autotest_common.sh@10 -- # set +x 00:05:50.470 ************************************ 00:05:50.470 END TEST event_reactor 00:05:50.470 ************************************ 00:05:50.470 02:53:45 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.470 02:53:45 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:50.470 02:53:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:50.470 02:53:45 -- common/autotest_common.sh@10 -- # set +x 00:05:50.470 ************************************ 00:05:50.470 START TEST event_reactor_perf 00:05:50.470 ************************************ 00:05:50.470 02:53:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:50.470 [2024-07-14 02:53:45.508717] 
Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:05:50.470 [2024-07-14 02:53:45.508803] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882222 ] 00:05:50.470 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.470 [2024-07-14 02:53:45.572833] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.470 [2024-07-14 02:53:45.662315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.845 test_start 00:05:51.845 test_end 00:05:51.845 Performance: 352980 events per second 00:05:51.845 00:05:51.845 real 0m1.251s 00:05:51.845 user 0m1.159s 00:05:51.845 sys 0m0.086s 00:05:51.845 02:53:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.845 02:53:46 -- common/autotest_common.sh@10 -- # set +x 00:05:51.845 ************************************ 00:05:51.845 END TEST event_reactor_perf 00:05:51.845 ************************************ 00:05:51.845 02:53:46 -- event/event.sh@49 -- # uname -s 00:05:51.845 02:53:46 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:51.845 02:53:46 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:51.845 02:53:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.845 02:53:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.845 02:53:46 -- common/autotest_common.sh@10 -- # set +x 00:05:51.845 ************************************ 00:05:51.845 START TEST event_scheduler 00:05:51.845 ************************************ 00:05:51.845 02:53:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:51.845 * Looking for test storage... 
00:05:51.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:51.845 02:53:46 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:51.845 02:53:46 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1882403 00:05:51.845 02:53:46 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:51.845 02:53:46 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.845 02:53:46 -- scheduler/scheduler.sh@37 -- # waitforlisten 1882403 00:05:51.845 02:53:46 -- common/autotest_common.sh@819 -- # '[' -z 1882403 ']' 00:05:51.845 02:53:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.845 02:53:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:51.845 02:53:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.845 02:53:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:51.845 02:53:46 -- common/autotest_common.sh@10 -- # set +x 00:05:51.845 [2024-07-14 02:53:46.864573] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:05:51.845 [2024-07-14 02:53:46.864644] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882403 ] 00:05:51.845 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.845 [2024-07-14 02:53:46.922497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:51.845 [2024-07-14 02:53:47.005189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.845 [2024-07-14 02:53:47.005246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.845 [2024-07-14 02:53:47.005311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.845 [2024-07-14 02:53:47.005314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.845 02:53:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:51.845 02:53:47 -- common/autotest_common.sh@852 -- # return 0 00:05:51.845 02:53:47 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:51.845 02:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:51.845 02:53:47 -- common/autotest_common.sh@10 -- # set +x 00:05:51.845 POWER: Env isn't set yet! 00:05:51.845 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:51.845 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:05:51.845 POWER: Cannot get available frequencies of lcore 0 00:05:51.845 POWER: Attempting to initialise PSTAT power management... 
00:05:51.845 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:51.845 POWER: Initialized successfully for lcore 0 power management 00:05:51.845 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:51.845 POWER: Initialized successfully for lcore 1 power management 00:05:51.845 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:51.845 POWER: Initialized successfully for lcore 2 power management 00:05:52.104 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:52.104 POWER: Initialized successfully for lcore 3 power management 00:05:52.104 [2024-07-14 02:53:47.103082] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:52.104 [2024-07-14 02:53:47.103102] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:52.104 [2024-07-14 02:53:47.103115] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:52.104 02:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.104 02:53:47 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:52.104 02:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.104 02:53:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.104 [2024-07-14 02:53:47.198459] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:52.104 02:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.104 02:53:47 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:52.104 02:53:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:52.104 02:53:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.104 02:53:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.104 ************************************ 00:05:52.104 START TEST scheduler_create_thread 00:05:52.104 ************************************ 00:05:52.104 02:53:47 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:05:52.104 02:53:47 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:52.104 02:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.104 02:53:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.104 2 00:05:52.104 02:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.104 02:53:47 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:52.104 02:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.104 02:53:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.104 3 00:05:52.104 02:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.104 02:53:47 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:52.104 02:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.104 02:53:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.104 4 00:05:52.104 02:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.104 02:53:47 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:52.104 02:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.104 
02:53:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.104 5 00:05:52.104 02:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.104 02:53:47 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:52.104 02:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.104 02:53:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.104 6 00:05:52.104 02:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.104 02:53:47 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:52.104 02:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.104 02:53:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.104 7 00:05:52.104 02:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.104 02:53:47 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:52.104 02:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.104 02:53:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.104 8 00:05:52.104 02:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.104 02:53:47 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:52.104 02:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.104 02:53:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.104 9 00:05:52.104 02:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.105 02:53:47 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:52.105 02:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.105 02:53:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.105 10 00:05:52.105 02:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:05:52.105 02:53:47 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:52.105 02:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.105 02:53:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.105 02:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.105 02:53:47 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:52.105 02:53:47 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:52.105 02:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.105 02:53:47 -- common/autotest_common.sh@10 -- # set +x 00:05:52.105 02:53:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:52.105 02:53:47 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:52.105 02:53:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:52.105 02:53:47 -- common/autotest_common.sh@10 -- # set +x 00:05:54.006 02:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.006 02:53:48 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:54.006 02:53:48 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:54.006 02:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:54.006 02:53:48 -- common/autotest_common.sh@10 -- # set +x 00:05:54.572 02:53:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:54.572 00:05:54.572 real 0m2.617s 00:05:54.572 user 0m0.011s 00:05:54.572 sys 0m0.004s 00:05:54.572 02:53:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.572 02:53:49 -- common/autotest_common.sh@10 -- # set +x 00:05:54.572 ************************************ 00:05:54.572 END TEST scheduler_create_thread 00:05:54.572 ************************************ 00:05:54.830 02:53:49 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:54.830 02:53:49 -- 
scheduler/scheduler.sh@46 -- # killprocess 1882403 00:05:54.830 02:53:49 -- common/autotest_common.sh@926 -- # '[' -z 1882403 ']' 00:05:54.830 02:53:49 -- common/autotest_common.sh@930 -- # kill -0 1882403 00:05:54.830 02:53:49 -- common/autotest_common.sh@931 -- # uname 00:05:54.830 02:53:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:54.830 02:53:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1882403 00:05:54.830 02:53:49 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:54.830 02:53:49 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:54.830 02:53:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1882403' 00:05:54.830 killing process with pid 1882403 00:05:54.830 02:53:49 -- common/autotest_common.sh@945 -- # kill 1882403 00:05:54.830 02:53:49 -- common/autotest_common.sh@950 -- # wait 1882403 00:05:55.088 [2024-07-14 02:53:50.301570] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:55.348 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:05:55.348 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:55.348 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:05:55.348 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:55.348 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:05:55.348 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:55.348 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:05:55.348 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:55.348 00:05:55.348 real 0m3.751s 00:05:55.348 user 0m5.726s 00:05:55.348 sys 0m0.298s 00:05:55.348 02:53:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.348 02:53:50 -- common/autotest_common.sh@10 -- # set +x 00:05:55.348 ************************************ 00:05:55.348 END TEST event_scheduler 00:05:55.348 ************************************ 00:05:55.348 02:53:50 -- event/event.sh@51 -- # modprobe -n nbd 00:05:55.348 02:53:50 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:55.348 02:53:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:55.348 02:53:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.348 02:53:50 -- common/autotest_common.sh@10 -- # set +x 00:05:55.348 ************************************ 00:05:55.348 START TEST app_repeat 00:05:55.348 ************************************ 00:05:55.348 02:53:50 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:05:55.348 02:53:50 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.348 02:53:50 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.348 
02:53:50 -- event/event.sh@13 -- # local nbd_list 00:05:55.348 02:53:50 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.348 02:53:50 -- event/event.sh@14 -- # local bdev_list 00:05:55.348 02:53:50 -- event/event.sh@15 -- # local repeat_times=4 00:05:55.348 02:53:50 -- event/event.sh@17 -- # modprobe nbd 00:05:55.348 02:53:50 -- event/event.sh@19 -- # repeat_pid=1882991 00:05:55.348 02:53:50 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:55.348 02:53:50 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.348 02:53:50 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1882991' 00:05:55.348 Process app_repeat pid: 1882991 00:05:55.348 02:53:50 -- event/event.sh@23 -- # for i in {0..2} 00:05:55.348 02:53:50 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:55.348 spdk_app_start Round 0 00:05:55.348 02:53:50 -- event/event.sh@25 -- # waitforlisten 1882991 /var/tmp/spdk-nbd.sock 00:05:55.348 02:53:50 -- common/autotest_common.sh@819 -- # '[' -z 1882991 ']' 00:05:55.348 02:53:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.348 02:53:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:55.348 02:53:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:55.348 02:53:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:55.348 02:53:50 -- common/autotest_common.sh@10 -- # set +x 00:05:55.348 [2024-07-14 02:53:50.580203] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:05:55.348 [2024-07-14 02:53:50.580282] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1882991 ] 00:05:55.606 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.606 [2024-07-14 02:53:50.638801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:55.606 [2024-07-14 02:53:50.724007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.606 [2024-07-14 02:53:50.724011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.540 02:53:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:56.540 02:53:51 -- common/autotest_common.sh@852 -- # return 0 00:05:56.540 02:53:51 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:56.540 Malloc0 00:05:56.798 02:53:51 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:57.087 Malloc1 00:05:57.087 02:53:52 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.087 02:53:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.087 02:53:52 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.087 02:53:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:57.087 02:53:52 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.087 02:53:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:57.087 02:53:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:57.087 02:53:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.087 02:53:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 
'Malloc1') 00:05:57.087 02:53:52 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:57.087 02:53:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.087 02:53:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:57.087 02:53:52 -- bdev/nbd_common.sh@12 -- # local i 00:05:57.087 02:53:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:57.087 02:53:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.087 02:53:52 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:57.087 /dev/nbd0 00:05:57.087 02:53:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:57.087 02:53:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:57.087 02:53:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:57.087 02:53:52 -- common/autotest_common.sh@857 -- # local i 00:05:57.087 02:53:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:57.087 02:53:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:57.087 02:53:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:57.087 02:53:52 -- common/autotest_common.sh@861 -- # break 00:05:57.087 02:53:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:57.087 02:53:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:57.087 02:53:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.346 1+0 records in 00:05:57.346 1+0 records out 00:05:57.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000189231 s, 21.6 MB/s 00:05:57.346 02:53:52 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.346 02:53:52 -- common/autotest_common.sh@874 -- # size=4096 00:05:57.346 02:53:52 -- common/autotest_common.sh@875 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.346 02:53:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:57.346 02:53:52 -- common/autotest_common.sh@877 -- # return 0 00:05:57.346 02:53:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.346 02:53:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.346 02:53:52 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:57.346 /dev/nbd1 00:05:57.346 02:53:52 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:57.346 02:53:52 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:57.346 02:53:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:57.346 02:53:52 -- common/autotest_common.sh@857 -- # local i 00:05:57.346 02:53:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:57.346 02:53:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:57.346 02:53:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:57.346 02:53:52 -- common/autotest_common.sh@861 -- # break 00:05:57.346 02:53:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:57.346 02:53:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:57.346 02:53:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:57.346 1+0 records in 00:05:57.346 1+0 records out 00:05:57.346 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207417 s, 19.7 MB/s 00:05:57.604 02:53:52 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.604 02:53:52 -- common/autotest_common.sh@874 -- # size=4096 00:05:57.604 02:53:52 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:57.604 02:53:52 -- common/autotest_common.sh@876 -- # '[' 
4096 '!=' 0 ']' 00:05:57.604 02:53:52 -- common/autotest_common.sh@877 -- # return 0 00:05:57.604 02:53:52 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:57.604 02:53:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:57.604 02:53:52 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.604 02:53:52 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.604 02:53:52 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:57.862 { 00:05:57.862 "nbd_device": "/dev/nbd0", 00:05:57.862 "bdev_name": "Malloc0" 00:05:57.862 }, 00:05:57.862 { 00:05:57.862 "nbd_device": "/dev/nbd1", 00:05:57.862 "bdev_name": "Malloc1" 00:05:57.862 } 00:05:57.862 ]' 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:57.862 { 00:05:57.862 "nbd_device": "/dev/nbd0", 00:05:57.862 "bdev_name": "Malloc0" 00:05:57.862 }, 00:05:57.862 { 00:05:57.862 "nbd_device": "/dev/nbd1", 00:05:57.862 "bdev_name": "Malloc1" 00:05:57.862 } 00:05:57.862 ]' 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:57.862 /dev/nbd1' 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:57.862 /dev/nbd1' 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@65 -- # count=2 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@95 -- # count=2 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.862 
02:53:52 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:57.862 256+0 records in 00:05:57.862 256+0 records out 00:05:57.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513962 s, 204 MB/s 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:57.862 256+0 records in 00:05:57.862 256+0 records out 00:05:57.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237036 s, 44.2 MB/s 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:57.862 256+0 records in 00:05:57.862 256+0 records out 00:05:57.862 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0248875 s, 42.1 MB/s 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:57.862 02:53:52 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.863 02:53:52 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:57.863 02:53:52 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:57.863 02:53:52 
-- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.863 02:53:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:57.863 02:53:52 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:57.863 02:53:52 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:57.863 02:53:52 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:57.863 02:53:52 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:57.863 02:53:52 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.863 02:53:52 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.863 02:53:52 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:57.863 02:53:52 -- bdev/nbd_common.sh@51 -- # local i 00:05:57.863 02:53:52 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.863 02:53:52 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:58.119 02:53:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:58.119 02:53:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:58.119 02:53:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:58.119 02:53:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.119 02:53:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.119 02:53:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:58.119 02:53:53 -- bdev/nbd_common.sh@41 -- # break 00:05:58.119 02:53:53 -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.120 02:53:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.120 02:53:53 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_stop_disk /dev/nbd1 00:05:58.376 02:53:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:58.376 02:53:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:58.376 02:53:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:58.376 02:53:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.376 02:53:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.376 02:53:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:58.376 02:53:53 -- bdev/nbd_common.sh@41 -- # break 00:05:58.376 02:53:53 -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.376 02:53:53 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.376 02:53:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.376 02:53:53 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.633 02:53:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.633 02:53:53 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.633 02:53:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.633 02:53:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:58.633 02:53:53 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:58.633 02:53:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.633 02:53:53 -- bdev/nbd_common.sh@65 -- # true 00:05:58.633 02:53:53 -- bdev/nbd_common.sh@65 -- # count=0 00:05:58.633 02:53:53 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:58.633 02:53:53 -- bdev/nbd_common.sh@104 -- # count=0 00:05:58.633 02:53:53 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:58.633 02:53:53 -- bdev/nbd_common.sh@109 -- # return 0 00:05:58.633 02:53:53 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:58.891 02:53:54 -- event/event.sh@35 -- # sleep 3 00:05:59.149 [2024-07-14 02:53:54.276643] app.c: 798:spdk_app_start: *NOTICE*: Total 
cores available: 2 00:05:59.149 [2024-07-14 02:53:54.364101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.149 [2024-07-14 02:53:54.364102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.407 [2024-07-14 02:53:54.426036] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:59.407 [2024-07-14 02:53:54.426098] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:01.958 02:53:57 -- event/event.sh@23 -- # for i in {0..2} 00:06:01.958 02:53:57 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:01.958 spdk_app_start Round 1 00:06:01.958 02:53:57 -- event/event.sh@25 -- # waitforlisten 1882991 /var/tmp/spdk-nbd.sock 00:06:01.958 02:53:57 -- common/autotest_common.sh@819 -- # '[' -z 1882991 ']' 00:06:01.958 02:53:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.958 02:53:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:01.958 02:53:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:01.958 02:53:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:01.958 02:53:57 -- common/autotest_common.sh@10 -- # set +x 00:06:02.215 02:53:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:02.215 02:53:57 -- common/autotest_common.sh@852 -- # return 0 00:06:02.215 02:53:57 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.472 Malloc0 00:06:02.472 02:53:57 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:02.729 Malloc1 00:06:02.729 02:53:57 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.729 02:53:57 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.729 02:53:57 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.729 02:53:57 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:02.729 02:53:57 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.729 02:53:57 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:02.729 02:53:57 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:02.729 02:53:57 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:02.729 02:53:57 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:02.729 02:53:57 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:02.729 02:53:57 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.729 02:53:57 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:02.729 02:53:57 -- bdev/nbd_common.sh@12 -- # local i 00:06:02.729 02:53:57 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:02.729 02:53:57 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.729 02:53:57 -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.986 /dev/nbd0 00:06:02.986 02:53:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.986 02:53:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.986 02:53:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:02.986 02:53:58 -- common/autotest_common.sh@857 -- # local i 00:06:02.986 02:53:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:02.986 02:53:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:02.986 02:53:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:02.986 02:53:58 -- common/autotest_common.sh@861 -- # break 00:06:02.986 02:53:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:02.986 02:53:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:02.986 02:53:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.987 1+0 records in 00:06:02.987 1+0 records out 00:06:02.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000173762 s, 23.6 MB/s 00:06:02.987 02:53:58 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:02.987 02:53:58 -- common/autotest_common.sh@874 -- # size=4096 00:06:02.987 02:53:58 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:02.987 02:53:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:02.987 02:53:58 -- common/autotest_common.sh@877 -- # return 0 00:06:02.987 02:53:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.987 02:53:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.987 02:53:58 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 
00:06:03.243 /dev/nbd1 00:06:03.243 02:53:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:03.243 02:53:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:03.243 02:53:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:03.243 02:53:58 -- common/autotest_common.sh@857 -- # local i 00:06:03.243 02:53:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:03.243 02:53:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:03.243 02:53:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:03.243 02:53:58 -- common/autotest_common.sh@861 -- # break 00:06:03.243 02:53:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:03.243 02:53:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:03.243 02:53:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:03.243 1+0 records in 00:06:03.243 1+0 records out 00:06:03.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213974 s, 19.1 MB/s 00:06:03.243 02:53:58 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.243 02:53:58 -- common/autotest_common.sh@874 -- # size=4096 00:06:03.243 02:53:58 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:03.243 02:53:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:03.243 02:53:58 -- common/autotest_common.sh@877 -- # return 0 00:06:03.243 02:53:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:03.243 02:53:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:03.243 02:53:58 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.243 02:53:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.243 02:53:58 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:03.501 { 00:06:03.501 "nbd_device": "/dev/nbd0", 00:06:03.501 "bdev_name": "Malloc0" 00:06:03.501 }, 00:06:03.501 { 00:06:03.501 "nbd_device": "/dev/nbd1", 00:06:03.501 "bdev_name": "Malloc1" 00:06:03.501 } 00:06:03.501 ]' 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:03.501 { 00:06:03.501 "nbd_device": "/dev/nbd0", 00:06:03.501 "bdev_name": "Malloc0" 00:06:03.501 }, 00:06:03.501 { 00:06:03.501 "nbd_device": "/dev/nbd1", 00:06:03.501 "bdev_name": "Malloc1" 00:06:03.501 } 00:06:03.501 ]' 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:03.501 /dev/nbd1' 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:03.501 /dev/nbd1' 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@65 -- # count=2 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@95 -- # count=2 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:03.501 256+0 records in 00:06:03.501 256+0 records out 00:06:03.501 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00471213 s, 223 MB/s 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:03.501 256+0 records in 00:06:03.501 256+0 records out 00:06:03.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0206801 s, 50.7 MB/s 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:03.501 256+0 records in 00:06:03.501 256+0 records out 00:06:03.501 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223033 s, 47.0 MB/s 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@85 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@51 -- # local i 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.501 02:53:58 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.758 02:53:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.758 02:53:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.758 02:53:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.758 02:53:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.758 02:53:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.758 02:53:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.758 02:53:58 -- bdev/nbd_common.sh@41 -- # break 00:06:03.758 02:53:58 -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.758 02:53:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.758 02:53:58 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:04.015 02:53:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:04.015 02:53:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:04.015 02:53:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:04.015 02:53:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.015 02:53:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.015 02:53:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:04.015 02:53:59 -- 
bdev/nbd_common.sh@41 -- # break 00:06:04.015 02:53:59 -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.015 02:53:59 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.015 02:53:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.015 02:53:59 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.274 02:53:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.274 02:53:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.274 02:53:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.274 02:53:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.274 02:53:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.274 02:53:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.274 02:53:59 -- bdev/nbd_common.sh@65 -- # true 00:06:04.274 02:53:59 -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.274 02:53:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.274 02:53:59 -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.274 02:53:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.274 02:53:59 -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.274 02:53:59 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.533 02:53:59 -- event/event.sh@35 -- # sleep 3 00:06:04.791 [2024-07-14 02:53:59.959689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.049 [2024-07-14 02:54:00.054031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.049 [2024-07-14 02:54:00.054037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.049 [2024-07-14 02:54:00.112536] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:06:05.049 [2024-07-14 02:54:00.112617] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.600 02:54:02 -- event/event.sh@23 -- # for i in {0..2} 00:06:07.600 02:54:02 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:07.600 spdk_app_start Round 2 00:06:07.600 02:54:02 -- event/event.sh@25 -- # waitforlisten 1882991 /var/tmp/spdk-nbd.sock 00:06:07.600 02:54:02 -- common/autotest_common.sh@819 -- # '[' -z 1882991 ']' 00:06:07.600 02:54:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.600 02:54:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.600 02:54:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:07.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.600 02:54:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.600 02:54:02 -- common/autotest_common.sh@10 -- # set +x 00:06:07.858 02:54:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:07.858 02:54:02 -- common/autotest_common.sh@852 -- # return 0 00:06:07.858 02:54:02 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.116 Malloc0 00:06:08.116 02:54:03 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.374 Malloc1 00:06:08.374 02:54:03 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.374 02:54:03 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.374 02:54:03 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.374 02:54:03 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.374 02:54:03 -- 
bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.374 02:54:03 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.374 02:54:03 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.374 02:54:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.374 02:54:03 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.374 02:54:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.374 02:54:03 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.374 02:54:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.374 02:54:03 -- bdev/nbd_common.sh@12 -- # local i 00:06:08.374 02:54:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.374 02:54:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.374 02:54:03 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.631 /dev/nbd0 00:06:08.631 02:54:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:08.631 02:54:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:08.631 02:54:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:08.632 02:54:03 -- common/autotest_common.sh@857 -- # local i 00:06:08.632 02:54:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:08.632 02:54:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:08.632 02:54:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:08.632 02:54:03 -- common/autotest_common.sh@861 -- # break 00:06:08.632 02:54:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:08.632 02:54:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:08.632 02:54:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.632 1+0 records in 00:06:08.632 
1+0 records out 00:06:08.632 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000144193 s, 28.4 MB/s 00:06:08.632 02:54:03 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.632 02:54:03 -- common/autotest_common.sh@874 -- # size=4096 00:06:08.632 02:54:03 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.632 02:54:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:08.632 02:54:03 -- common/autotest_common.sh@877 -- # return 0 00:06:08.632 02:54:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.632 02:54:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.632 02:54:03 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:08.889 /dev/nbd1 00:06:08.889 02:54:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:08.889 02:54:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:08.889 02:54:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:08.889 02:54:04 -- common/autotest_common.sh@857 -- # local i 00:06:08.889 02:54:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:08.889 02:54:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:08.889 02:54:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:08.889 02:54:04 -- common/autotest_common.sh@861 -- # break 00:06:08.889 02:54:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:08.889 02:54:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:08.889 02:54:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:08.889 1+0 records in 00:06:08.889 1+0 records out 00:06:08.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223921 s, 18.3 MB/s 00:06:08.889 02:54:04 -- 
common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.889 02:54:04 -- common/autotest_common.sh@874 -- # size=4096 00:06:08.889 02:54:04 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:08.889 02:54:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:08.889 02:54:04 -- common/autotest_common.sh@877 -- # return 0 00:06:08.889 02:54:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:08.889 02:54:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.889 02:54:04 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.889 02:54:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.889 02:54:04 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.147 { 00:06:09.147 "nbd_device": "/dev/nbd0", 00:06:09.147 "bdev_name": "Malloc0" 00:06:09.147 }, 00:06:09.147 { 00:06:09.147 "nbd_device": "/dev/nbd1", 00:06:09.147 "bdev_name": "Malloc1" 00:06:09.147 } 00:06:09.147 ]' 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.147 { 00:06:09.147 "nbd_device": "/dev/nbd0", 00:06:09.147 "bdev_name": "Malloc0" 00:06:09.147 }, 00:06:09.147 { 00:06:09.147 "nbd_device": "/dev/nbd1", 00:06:09.147 "bdev_name": "Malloc1" 00:06:09.147 } 00:06:09.147 ]' 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.147 /dev/nbd1' 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.147 /dev/nbd1' 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.147 
02:54:04 -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.147 256+0 records in 00:06:09.147 256+0 records out 00:06:09.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00510083 s, 206 MB/s 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.147 256+0 records in 00:06:09.147 256+0 records out 00:06:09.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212313 s, 49.4 MB/s 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.147 256+0 records in 00:06:09.147 256+0 records out 00:06:09.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222871 s, 47.0 MB/s 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@70 -- # local nbd_list 
00:06:09.147 02:54:04 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@51 -- # local i 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.147 02:54:04 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:09.405 02:54:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:09.405 02:54:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:09.405 02:54:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:09.405 02:54:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.405 02:54:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.405 02:54:04 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:09.405 02:54:04 -- bdev/nbd_common.sh@41 -- # break 00:06:09.405 02:54:04 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.405 02:54:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.405 02:54:04 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:09.663 02:54:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:09.663 02:54:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:09.663 02:54:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:09.663 02:54:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:09.663 02:54:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:09.663 02:54:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:09.663 02:54:04 -- bdev/nbd_common.sh@41 -- # break 00:06:09.663 02:54:04 -- bdev/nbd_common.sh@45 -- # return 0 00:06:09.663 02:54:04 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.663 02:54:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.663 02:54:04 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.921 02:54:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:09.921 02:54:05 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:09.921 02:54:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.180 02:54:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.180 02:54:05 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.180 02:54:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.180 02:54:05 -- bdev/nbd_common.sh@65 -- # true 00:06:10.180 02:54:05 -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.180 02:54:05 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.180 02:54:05 -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.180 02:54:05 -- 
bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.180 02:54:05 -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.180 02:54:05 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:10.439 02:54:05 -- event/event.sh@35 -- # sleep 3 00:06:10.439 [2024-07-14 02:54:05.676882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.697 [2024-07-14 02:54:05.764582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.697 [2024-07-14 02:54:05.764587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.697 [2024-07-14 02:54:05.822965] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.697 [2024-07-14 02:54:05.823026] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:13.253 02:54:08 -- event/event.sh@38 -- # waitforlisten 1882991 /var/tmp/spdk-nbd.sock 00:06:13.253 02:54:08 -- common/autotest_common.sh@819 -- # '[' -z 1882991 ']' 00:06:13.253 02:54:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:13.253 02:54:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:13.253 02:54:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:13.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:13.253 02:54:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:13.253 02:54:08 -- common/autotest_common.sh@10 -- # set +x 00:06:13.511 02:54:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:13.511 02:54:08 -- common/autotest_common.sh@852 -- # return 0 00:06:13.511 02:54:08 -- event/event.sh@39 -- # killprocess 1882991 00:06:13.511 02:54:08 -- common/autotest_common.sh@926 -- # '[' -z 1882991 ']' 00:06:13.511 02:54:08 -- common/autotest_common.sh@930 -- # kill -0 1882991 00:06:13.511 02:54:08 -- common/autotest_common.sh@931 -- # uname 00:06:13.511 02:54:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:13.511 02:54:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1882991 00:06:13.511 02:54:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:13.512 02:54:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:13.512 02:54:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1882991' 00:06:13.512 killing process with pid 1882991 00:06:13.512 02:54:08 -- common/autotest_common.sh@945 -- # kill 1882991 00:06:13.512 02:54:08 -- common/autotest_common.sh@950 -- # wait 1882991 00:06:13.771 spdk_app_start is called in Round 0. 00:06:13.771 Shutdown signal received, stop current app iteration 00:06:13.771 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:06:13.771 spdk_app_start is called in Round 1. 00:06:13.771 Shutdown signal received, stop current app iteration 00:06:13.771 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:06:13.771 spdk_app_start is called in Round 2. 00:06:13.771 Shutdown signal received, stop current app iteration 00:06:13.771 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:06:13.771 spdk_app_start is called in Round 3. 
00:06:13.771 Shutdown signal received, stop current app iteration 00:06:13.771 02:54:08 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:13.771 02:54:08 -- event/event.sh@42 -- # return 0 00:06:13.771 00:06:13.771 real 0m18.362s 00:06:13.771 user 0m39.930s 00:06:13.771 sys 0m3.244s 00:06:13.771 02:54:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.771 02:54:08 -- common/autotest_common.sh@10 -- # set +x 00:06:13.771 ************************************ 00:06:13.771 END TEST app_repeat 00:06:13.771 ************************************ 00:06:13.771 02:54:08 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:13.771 02:54:08 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:13.771 02:54:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.771 02:54:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.771 02:54:08 -- common/autotest_common.sh@10 -- # set +x 00:06:13.771 ************************************ 00:06:13.771 START TEST cpu_locks 00:06:13.771 ************************************ 00:06:13.771 02:54:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:13.771 * Looking for test storage... 
00:06:13.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:13.771 02:54:08 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:13.771 02:54:08 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:13.771 02:54:08 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:13.771 02:54:08 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:13.771 02:54:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:13.771 02:54:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.771 02:54:08 -- common/autotest_common.sh@10 -- # set +x 00:06:13.771 ************************************ 00:06:13.771 START TEST default_locks 00:06:13.771 ************************************ 00:06:13.771 02:54:08 -- common/autotest_common.sh@1104 -- # default_locks 00:06:13.771 02:54:08 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1886013 00:06:13.771 02:54:08 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:13.771 02:54:08 -- event/cpu_locks.sh@47 -- # waitforlisten 1886013 00:06:13.771 02:54:08 -- common/autotest_common.sh@819 -- # '[' -z 1886013 ']' 00:06:13.771 02:54:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.771 02:54:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:13.771 02:54:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.771 02:54:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:13.771 02:54:08 -- common/autotest_common.sh@10 -- # set +x 00:06:14.030 [2024-07-14 02:54:09.044240] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:14.030 [2024-07-14 02:54:09.044335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1886013 ] 00:06:14.030 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.030 [2024-07-14 02:54:09.101213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.030 [2024-07-14 02:54:09.184048] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:14.030 [2024-07-14 02:54:09.184240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.964 02:54:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:14.964 02:54:10 -- common/autotest_common.sh@852 -- # return 0 00:06:14.964 02:54:10 -- event/cpu_locks.sh@49 -- # locks_exist 1886013 00:06:14.964 02:54:10 -- event/cpu_locks.sh@22 -- # lslocks -p 1886013 00:06:14.964 02:54:10 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:15.222 lslocks: write error 00:06:15.222 02:54:10 -- event/cpu_locks.sh@50 -- # killprocess 1886013 00:06:15.222 02:54:10 -- common/autotest_common.sh@926 -- # '[' -z 1886013 ']' 00:06:15.222 02:54:10 -- common/autotest_common.sh@930 -- # kill -0 1886013 00:06:15.222 02:54:10 -- common/autotest_common.sh@931 -- # uname 00:06:15.222 02:54:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:15.222 02:54:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1886013 00:06:15.222 02:54:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:15.222 02:54:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:15.222 02:54:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1886013' 00:06:15.222 killing process with pid 1886013 00:06:15.222 02:54:10 -- common/autotest_common.sh@945 -- # kill 1886013 00:06:15.222 02:54:10 -- common/autotest_common.sh@950 -- # 
wait 1886013 00:06:15.789 02:54:10 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1886013 00:06:15.789 02:54:10 -- common/autotest_common.sh@640 -- # local es=0 00:06:15.789 02:54:10 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1886013 00:06:15.789 02:54:10 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:15.789 02:54:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:15.789 02:54:10 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:15.789 02:54:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:15.789 02:54:10 -- common/autotest_common.sh@643 -- # waitforlisten 1886013 00:06:15.789 02:54:10 -- common/autotest_common.sh@819 -- # '[' -z 1886013 ']' 00:06:15.789 02:54:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.789 02:54:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:15.789 02:54:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:15.789 02:54:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:15.789 02:54:10 -- common/autotest_common.sh@10 -- # set +x 00:06:15.789 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1886013) - No such process 00:06:15.789 ERROR: process (pid: 1886013) is no longer running 00:06:15.789 02:54:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:15.789 02:54:10 -- common/autotest_common.sh@852 -- # return 1 00:06:15.789 02:54:10 -- common/autotest_common.sh@643 -- # es=1 00:06:15.789 02:54:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:15.789 02:54:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:15.789 02:54:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:15.789 02:54:10 -- event/cpu_locks.sh@54 -- # no_locks 00:06:15.789 02:54:10 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:15.789 02:54:10 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:15.789 02:54:10 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:15.789 00:06:15.789 real 0m1.786s 00:06:15.789 user 0m1.951s 00:06:15.789 sys 0m0.572s 00:06:15.789 02:54:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.789 02:54:10 -- common/autotest_common.sh@10 -- # set +x 00:06:15.789 ************************************ 00:06:15.789 END TEST default_locks 00:06:15.789 ************************************ 00:06:15.789 02:54:10 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:15.789 02:54:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.789 02:54:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.789 02:54:10 -- common/autotest_common.sh@10 -- # set +x 00:06:15.789 ************************************ 00:06:15.789 START TEST default_locks_via_rpc 00:06:15.789 ************************************ 00:06:15.789 02:54:10 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:15.789 02:54:10 -- 
event/cpu_locks.sh@62 -- # spdk_tgt_pid=1886326 00:06:15.789 02:54:10 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.789 02:54:10 -- event/cpu_locks.sh@63 -- # waitforlisten 1886326 00:06:15.789 02:54:10 -- common/autotest_common.sh@819 -- # '[' -z 1886326 ']' 00:06:15.789 02:54:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.789 02:54:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:15.789 02:54:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.789 02:54:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:15.789 02:54:10 -- common/autotest_common.sh@10 -- # set +x 00:06:15.789 [2024-07-14 02:54:10.854082] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:15.790 [2024-07-14 02:54:10.854181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1886326 ] 00:06:15.790 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.790 [2024-07-14 02:54:10.914651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.790 [2024-07-14 02:54:11.007019] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:15.790 [2024-07-14 02:54:11.007212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.722 02:54:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:16.722 02:54:11 -- common/autotest_common.sh@852 -- # return 0 00:06:16.722 02:54:11 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:16.722 02:54:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:16.722 02:54:11 -- common/autotest_common.sh@10 -- # set +x 00:06:16.722 02:54:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:16.722 02:54:11 -- event/cpu_locks.sh@67 -- # no_locks 00:06:16.722 02:54:11 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:16.722 02:54:11 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:16.722 02:54:11 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:16.722 02:54:11 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:16.722 02:54:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:16.722 02:54:11 -- common/autotest_common.sh@10 -- # set +x 00:06:16.722 02:54:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:16.722 02:54:11 -- event/cpu_locks.sh@71 -- # locks_exist 1886326 00:06:16.722 02:54:11 -- event/cpu_locks.sh@22 -- # lslocks -p 1886326 00:06:16.722 02:54:11 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.286 02:54:12 -- event/cpu_locks.sh@73 -- # killprocess 1886326 
00:06:17.286 02:54:12 -- common/autotest_common.sh@926 -- # '[' -z 1886326 ']' 00:06:17.286 02:54:12 -- common/autotest_common.sh@930 -- # kill -0 1886326 00:06:17.286 02:54:12 -- common/autotest_common.sh@931 -- # uname 00:06:17.286 02:54:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:17.287 02:54:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1886326 00:06:17.287 02:54:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:17.287 02:54:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:17.287 02:54:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1886326' 00:06:17.287 killing process with pid 1886326 00:06:17.287 02:54:12 -- common/autotest_common.sh@945 -- # kill 1886326 00:06:17.287 02:54:12 -- common/autotest_common.sh@950 -- # wait 1886326 00:06:17.545 00:06:17.545 real 0m1.951s 00:06:17.545 user 0m2.142s 00:06:17.545 sys 0m0.606s 00:06:17.545 02:54:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.545 02:54:12 -- common/autotest_common.sh@10 -- # set +x 00:06:17.545 ************************************ 00:06:17.545 END TEST default_locks_via_rpc 00:06:17.545 ************************************ 00:06:17.545 02:54:12 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:17.545 02:54:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.545 02:54:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.545 02:54:12 -- common/autotest_common.sh@10 -- # set +x 00:06:17.545 ************************************ 00:06:17.545 START TEST non_locking_app_on_locked_coremask 00:06:17.545 ************************************ 00:06:17.545 02:54:12 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:17.545 02:54:12 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1886500 00:06:17.545 02:54:12 -- event/cpu_locks.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:17.545 02:54:12 -- event/cpu_locks.sh@81 -- # waitforlisten 1886500 /var/tmp/spdk.sock 00:06:17.545 02:54:12 -- common/autotest_common.sh@819 -- # '[' -z 1886500 ']' 00:06:17.545 02:54:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.545 02:54:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:17.545 02:54:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.545 02:54:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:17.545 02:54:12 -- common/autotest_common.sh@10 -- # set +x 00:06:17.803 [2024-07-14 02:54:12.833216] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:17.803 [2024-07-14 02:54:12.833306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1886500 ] 00:06:17.803 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.803 [2024-07-14 02:54:12.891229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.803 [2024-07-14 02:54:12.976518] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:17.803 [2024-07-14 02:54:12.976689] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.733 02:54:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:18.733 02:54:13 -- common/autotest_common.sh@852 -- # return 0 00:06:18.733 02:54:13 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1886638 00:06:18.733 02:54:13 -- event/cpu_locks.sh@85 -- # waitforlisten 1886638 /var/tmp/spdk2.sock 00:06:18.733 02:54:13 -- common/autotest_common.sh@819 -- # 
'[' -z 1886638 ']' 00:06:18.733 02:54:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.733 02:54:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:18.733 02:54:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.733 02:54:13 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:18.733 02:54:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:18.733 02:54:13 -- common/autotest_common.sh@10 -- # set +x 00:06:18.733 [2024-07-14 02:54:13.826577] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:18.733 [2024-07-14 02:54:13.826653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1886638 ] 00:06:18.733 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.733 [2024-07-14 02:54:13.922960] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:18.733 [2024-07-14 02:54:13.922996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.990 [2024-07-14 02:54:14.106311] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:18.990 [2024-07-14 02:54:14.106500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.555 02:54:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:19.555 02:54:14 -- common/autotest_common.sh@852 -- # return 0 00:06:19.555 02:54:14 -- event/cpu_locks.sh@87 -- # locks_exist 1886500 00:06:19.555 02:54:14 -- event/cpu_locks.sh@22 -- # lslocks -p 1886500 00:06:19.555 02:54:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.119 lslocks: write error 00:06:20.119 02:54:15 -- event/cpu_locks.sh@89 -- # killprocess 1886500 00:06:20.119 02:54:15 -- common/autotest_common.sh@926 -- # '[' -z 1886500 ']' 00:06:20.119 02:54:15 -- common/autotest_common.sh@930 -- # kill -0 1886500 00:06:20.119 02:54:15 -- common/autotest_common.sh@931 -- # uname 00:06:20.119 02:54:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:20.119 02:54:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1886500 00:06:20.119 02:54:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:20.119 02:54:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:20.119 02:54:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1886500' 00:06:20.119 killing process with pid 1886500 00:06:20.119 02:54:15 -- common/autotest_common.sh@945 -- # kill 1886500 00:06:20.119 02:54:15 -- common/autotest_common.sh@950 -- # wait 1886500 00:06:21.053 02:54:16 -- event/cpu_locks.sh@90 -- # killprocess 1886638 00:06:21.053 02:54:16 -- common/autotest_common.sh@926 -- # '[' -z 1886638 ']' 00:06:21.053 02:54:16 -- common/autotest_common.sh@930 -- # kill -0 1886638 00:06:21.053 02:54:16 -- common/autotest_common.sh@931 -- # uname 00:06:21.053 02:54:16 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:21.053 02:54:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1886638 00:06:21.053 02:54:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:21.053 02:54:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:21.053 02:54:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1886638' 00:06:21.053 killing process with pid 1886638 00:06:21.053 02:54:16 -- common/autotest_common.sh@945 -- # kill 1886638 00:06:21.053 02:54:16 -- common/autotest_common.sh@950 -- # wait 1886638 00:06:21.312 00:06:21.312 real 0m3.735s 00:06:21.312 user 0m4.052s 00:06:21.312 sys 0m1.078s 00:06:21.312 02:54:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.312 02:54:16 -- common/autotest_common.sh@10 -- # set +x 00:06:21.312 ************************************ 00:06:21.312 END TEST non_locking_app_on_locked_coremask 00:06:21.312 ************************************ 00:06:21.312 02:54:16 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:21.312 02:54:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:21.312 02:54:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.312 02:54:16 -- common/autotest_common.sh@10 -- # set +x 00:06:21.312 ************************************ 00:06:21.312 START TEST locking_app_on_unlocked_coremask 00:06:21.312 ************************************ 00:06:21.312 02:54:16 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:21.312 02:54:16 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1887073 00:06:21.312 02:54:16 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:21.312 02:54:16 -- event/cpu_locks.sh@99 -- # waitforlisten 1887073 /var/tmp/spdk.sock 00:06:21.312 02:54:16 -- common/autotest_common.sh@819 -- # '[' -z 1887073 ']' 
00:06:21.312 02:54:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.312 02:54:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:21.312 02:54:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.312 02:54:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:21.312 02:54:16 -- common/autotest_common.sh@10 -- # set +x 00:06:21.571 [2024-07-14 02:54:16.596678] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:21.571 [2024-07-14 02:54:16.596771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1887073 ] 00:06:21.571 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.571 [2024-07-14 02:54:16.658021] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:21.571 [2024-07-14 02:54:16.658066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.571 [2024-07-14 02:54:16.745344] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:21.571 [2024-07-14 02:54:16.745533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.506 02:54:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:22.506 02:54:17 -- common/autotest_common.sh@852 -- # return 0 00:06:22.506 02:54:17 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1887212 00:06:22.506 02:54:17 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:22.506 02:54:17 -- event/cpu_locks.sh@103 -- # waitforlisten 1887212 /var/tmp/spdk2.sock 00:06:22.506 02:54:17 -- common/autotest_common.sh@819 -- # '[' -z 1887212 ']' 00:06:22.506 02:54:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.506 02:54:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:22.506 02:54:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.506 02:54:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:22.506 02:54:17 -- common/autotest_common.sh@10 -- # set +x 00:06:22.506 [2024-07-14 02:54:17.574440] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:22.506 [2024-07-14 02:54:17.574533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1887212 ] 00:06:22.506 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.506 [2024-07-14 02:54:17.669424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.764 [2024-07-14 02:54:17.848170] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:22.764 [2024-07-14 02:54:17.848366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.330 02:54:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:23.330 02:54:18 -- common/autotest_common.sh@852 -- # return 0 00:06:23.330 02:54:18 -- event/cpu_locks.sh@105 -- # locks_exist 1887212 00:06:23.330 02:54:18 -- event/cpu_locks.sh@22 -- # lslocks -p 1887212 00:06:23.330 02:54:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.896 lslocks: write error 00:06:23.896 02:54:18 -- event/cpu_locks.sh@107 -- # killprocess 1887073 00:06:23.896 02:54:18 -- common/autotest_common.sh@926 -- # '[' -z 1887073 ']' 00:06:23.896 02:54:18 -- common/autotest_common.sh@930 -- # kill -0 1887073 00:06:23.896 02:54:18 -- common/autotest_common.sh@931 -- # uname 00:06:23.896 02:54:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:23.896 02:54:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1887073 00:06:23.896 02:54:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:23.896 02:54:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:23.896 02:54:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1887073' 00:06:23.896 killing process with pid 1887073 00:06:23.896 02:54:18 -- common/autotest_common.sh@945 -- # kill 1887073 00:06:23.896 02:54:18 -- common/autotest_common.sh@950 -- # 
wait 1887073 00:06:24.835 02:54:19 -- event/cpu_locks.sh@108 -- # killprocess 1887212 00:06:24.835 02:54:19 -- common/autotest_common.sh@926 -- # '[' -z 1887212 ']' 00:06:24.835 02:54:19 -- common/autotest_common.sh@930 -- # kill -0 1887212 00:06:24.835 02:54:19 -- common/autotest_common.sh@931 -- # uname 00:06:24.835 02:54:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:24.835 02:54:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1887212 00:06:24.835 02:54:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:24.835 02:54:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:24.835 02:54:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1887212' 00:06:24.835 killing process with pid 1887212 00:06:24.835 02:54:19 -- common/autotest_common.sh@945 -- # kill 1887212 00:06:24.835 02:54:19 -- common/autotest_common.sh@950 -- # wait 1887212 00:06:25.093 00:06:25.094 real 0m3.659s 00:06:25.094 user 0m3.948s 00:06:25.094 sys 0m1.088s 00:06:25.094 02:54:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.094 02:54:20 -- common/autotest_common.sh@10 -- # set +x 00:06:25.094 ************************************ 00:06:25.094 END TEST locking_app_on_unlocked_coremask 00:06:25.094 ************************************ 00:06:25.094 02:54:20 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:25.094 02:54:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:25.094 02:54:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:25.094 02:54:20 -- common/autotest_common.sh@10 -- # set +x 00:06:25.094 ************************************ 00:06:25.094 START TEST locking_app_on_locked_coremask 00:06:25.094 ************************************ 00:06:25.094 02:54:20 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:25.094 02:54:20 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1887525 
00:06:25.094 02:54:20 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:25.094 02:54:20 -- event/cpu_locks.sh@116 -- # waitforlisten 1887525 /var/tmp/spdk.sock 00:06:25.094 02:54:20 -- common/autotest_common.sh@819 -- # '[' -z 1887525 ']' 00:06:25.094 02:54:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.094 02:54:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:25.094 02:54:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.094 02:54:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:25.094 02:54:20 -- common/autotest_common.sh@10 -- # set +x 00:06:25.094 [2024-07-14 02:54:20.286457] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:25.094 [2024-07-14 02:54:20.286549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1887525 ] 00:06:25.094 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.353 [2024-07-14 02:54:20.347976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.353 [2024-07-14 02:54:20.433106] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:25.353 [2024-07-14 02:54:20.433289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.290 02:54:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.290 02:54:21 -- common/autotest_common.sh@852 -- # return 0 00:06:26.290 02:54:21 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1887666 00:06:26.290 02:54:21 -- event/cpu_locks.sh@118 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:26.290 02:54:21 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1887666 /var/tmp/spdk2.sock 00:06:26.290 02:54:21 -- common/autotest_common.sh@640 -- # local es=0 00:06:26.290 02:54:21 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1887666 /var/tmp/spdk2.sock 00:06:26.290 02:54:21 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:26.290 02:54:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:26.290 02:54:21 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:26.290 02:54:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:26.290 02:54:21 -- common/autotest_common.sh@643 -- # waitforlisten 1887666 /var/tmp/spdk2.sock 00:06:26.290 02:54:21 -- common/autotest_common.sh@819 -- # '[' -z 1887666 ']' 00:06:26.290 02:54:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.290 02:54:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.290 02:54:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.290 02:54:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.290 02:54:21 -- common/autotest_common.sh@10 -- # set +x 00:06:26.290 [2024-07-14 02:54:21.277700] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:26.290 [2024-07-14 02:54:21.277780] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1887666 ] 00:06:26.290 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.290 [2024-07-14 02:54:21.371933] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1887525 has claimed it. 00:06:26.290 [2024-07-14 02:54:21.371993] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:26.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1887666) - No such process 00:06:26.887 ERROR: process (pid: 1887666) is no longer running 00:06:26.887 02:54:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.887 02:54:21 -- common/autotest_common.sh@852 -- # return 1 00:06:26.887 02:54:21 -- common/autotest_common.sh@643 -- # es=1 00:06:26.887 02:54:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:26.887 02:54:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:26.887 02:54:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:26.887 02:54:21 -- event/cpu_locks.sh@122 -- # locks_exist 1887525 00:06:26.887 02:54:21 -- event/cpu_locks.sh@22 -- # lslocks -p 1887525 00:06:26.887 02:54:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.148 lslocks: write error 00:06:27.149 02:54:22 -- event/cpu_locks.sh@124 -- # killprocess 1887525 00:06:27.149 02:54:22 -- common/autotest_common.sh@926 -- # '[' -z 1887525 ']' 00:06:27.149 02:54:22 -- common/autotest_common.sh@930 -- # kill -0 1887525 00:06:27.149 02:54:22 -- common/autotest_common.sh@931 -- # uname 00:06:27.149 02:54:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:27.149 02:54:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1887525 00:06:27.149 02:54:22 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:27.149 02:54:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:27.149 02:54:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1887525' 00:06:27.149 killing process with pid 1887525 00:06:27.149 02:54:22 -- common/autotest_common.sh@945 -- # kill 1887525 00:06:27.149 02:54:22 -- common/autotest_common.sh@950 -- # wait 1887525 00:06:27.716 00:06:27.716 real 0m2.430s 00:06:27.716 user 0m2.756s 00:06:27.716 sys 0m0.655s 00:06:27.716 02:54:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.717 02:54:22 -- common/autotest_common.sh@10 -- # set +x 00:06:27.717 ************************************ 00:06:27.717 END TEST locking_app_on_locked_coremask 00:06:27.717 ************************************ 00:06:27.717 02:54:22 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:27.717 02:54:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:27.717 02:54:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.717 02:54:22 -- common/autotest_common.sh@10 -- # set +x 00:06:27.717 ************************************ 00:06:27.717 START TEST locking_overlapped_coremask 00:06:27.717 ************************************ 00:06:27.717 02:54:22 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:27.717 02:54:22 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1887834 00:06:27.717 02:54:22 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:27.717 02:54:22 -- event/cpu_locks.sh@133 -- # waitforlisten 1887834 /var/tmp/spdk.sock 00:06:27.717 02:54:22 -- common/autotest_common.sh@819 -- # '[' -z 1887834 ']' 00:06:27.717 02:54:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.717 02:54:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:27.717 02:54:22 -- 
common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.717 02:54:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:27.717 02:54:22 -- common/autotest_common.sh@10 -- # set +x 00:06:27.717 [2024-07-14 02:54:22.743890] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:27.717 [2024-07-14 02:54:22.743984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1887834 ] 00:06:27.717 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.717 [2024-07-14 02:54:22.809858] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:27.717 [2024-07-14 02:54:22.904514] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:27.717 [2024-07-14 02:54:22.904725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.717 [2024-07-14 02:54:22.904776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.717 [2024-07-14 02:54:22.904794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.651 02:54:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:28.651 02:54:23 -- common/autotest_common.sh@852 -- # return 0 00:06:28.651 02:54:23 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1887976 00:06:28.651 02:54:23 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1887976 /var/tmp/spdk2.sock 00:06:28.651 02:54:23 -- common/autotest_common.sh@640 -- # local es=0 00:06:28.651 02:54:23 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1887976 /var/tmp/spdk2.sock 00:06:28.651 02:54:23 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:28.651 
02:54:23 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:28.651 02:54:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:28.651 02:54:23 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:28.651 02:54:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:28.651 02:54:23 -- common/autotest_common.sh@643 -- # waitforlisten 1887976 /var/tmp/spdk2.sock 00:06:28.651 02:54:23 -- common/autotest_common.sh@819 -- # '[' -z 1887976 ']' 00:06:28.651 02:54:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.651 02:54:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.651 02:54:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.651 02:54:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.651 02:54:23 -- common/autotest_common.sh@10 -- # set +x 00:06:28.651 [2024-07-14 02:54:23.739637] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:28.651 [2024-07-14 02:54:23.739714] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1887976 ] 00:06:28.651 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.651 [2024-07-14 02:54:23.828552] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1887834 has claimed it. 00:06:28.651 [2024-07-14 02:54:23.828613] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:29.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1887976) - No such process 00:06:29.217 ERROR: process (pid: 1887976) is no longer running 00:06:29.217 02:54:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.217 02:54:24 -- common/autotest_common.sh@852 -- # return 1 00:06:29.217 02:54:24 -- common/autotest_common.sh@643 -- # es=1 00:06:29.217 02:54:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:29.217 02:54:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:29.217 02:54:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:29.217 02:54:24 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:29.217 02:54:24 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:29.217 02:54:24 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:29.217 02:54:24 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:29.217 02:54:24 -- event/cpu_locks.sh@141 -- # killprocess 1887834 00:06:29.217 02:54:24 -- common/autotest_common.sh@926 -- # '[' -z 1887834 ']' 00:06:29.217 02:54:24 -- common/autotest_common.sh@930 -- # kill -0 1887834 00:06:29.217 02:54:24 -- common/autotest_common.sh@931 -- # uname 00:06:29.217 02:54:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:29.217 02:54:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1887834 00:06:29.217 02:54:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:29.217 02:54:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:29.217 02:54:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1887834' 00:06:29.217 killing process with pid 1887834 00:06:29.217 
02:54:24 -- common/autotest_common.sh@945 -- # kill 1887834 00:06:29.217 02:54:24 -- common/autotest_common.sh@950 -- # wait 1887834 00:06:29.788 00:06:29.788 real 0m2.171s 00:06:29.788 user 0m6.210s 00:06:29.788 sys 0m0.469s 00:06:29.788 02:54:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.788 02:54:24 -- common/autotest_common.sh@10 -- # set +x 00:06:29.788 ************************************ 00:06:29.788 END TEST locking_overlapped_coremask 00:06:29.788 ************************************ 00:06:29.788 02:54:24 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:29.788 02:54:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:29.788 02:54:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.788 02:54:24 -- common/autotest_common.sh@10 -- # set +x 00:06:29.788 ************************************ 00:06:29.788 START TEST locking_overlapped_coremask_via_rpc 00:06:29.788 ************************************ 00:06:29.788 02:54:24 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:29.788 02:54:24 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1888146 00:06:29.788 02:54:24 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:29.788 02:54:24 -- event/cpu_locks.sh@149 -- # waitforlisten 1888146 /var/tmp/spdk.sock 00:06:29.788 02:54:24 -- common/autotest_common.sh@819 -- # '[' -z 1888146 ']' 00:06:29.788 02:54:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.788 02:54:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.788 02:54:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:29.788 02:54:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.788 02:54:24 -- common/autotest_common.sh@10 -- # set +x 00:06:29.788 [2024-07-14 02:54:24.948844] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:29.788 [2024-07-14 02:54:24.948947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1888146 ] 00:06:29.788 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.788 [2024-07-14 02:54:25.015533] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:29.788 [2024-07-14 02:54:25.015577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.046 [2024-07-14 02:54:25.109758] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.046 [2024-07-14 02:54:25.109984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.046 [2024-07-14 02:54:25.110041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.046 [2024-07-14 02:54:25.110044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.983 02:54:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:30.983 02:54:25 -- common/autotest_common.sh@852 -- # return 0 00:06:30.983 02:54:25 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1888290 00:06:30.983 02:54:25 -- event/cpu_locks.sh@153 -- # waitforlisten 1888290 /var/tmp/spdk2.sock 00:06:30.983 02:54:25 -- common/autotest_common.sh@819 -- # '[' -z 1888290 ']' 00:06:30.983 02:54:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:30.983 02:54:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:30.983 02:54:25 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 
--disable-cpumask-locks 00:06:30.983 02:54:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:30.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:30.983 02:54:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:30.983 02:54:25 -- common/autotest_common.sh@10 -- # set +x 00:06:30.983 [2024-07-14 02:54:25.954923] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:30.983 [2024-07-14 02:54:25.954998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1888290 ] 00:06:30.983 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.983 [2024-07-14 02:54:26.043057] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:30.983 [2024-07-14 02:54:26.043091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.983 [2024-07-14 02:54:26.214032] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.983 [2024-07-14 02:54:26.214282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.984 [2024-07-14 02:54:26.217963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:30.984 [2024-07-14 02:54:26.217965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.918 02:54:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.918 02:54:26 -- common/autotest_common.sh@852 -- # return 0 00:06:31.918 02:54:26 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:31.918 02:54:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.918 02:54:26 -- common/autotest_common.sh@10 -- # set +x 00:06:31.918 02:54:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:06:31.918 02:54:26 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.918 02:54:26 -- common/autotest_common.sh@640 -- # local es=0 00:06:31.918 02:54:26 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.918 02:54:26 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:31.918 02:54:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:31.918 02:54:26 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:31.918 02:54:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:31.918 02:54:26 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:31.918 02:54:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.918 02:54:26 -- common/autotest_common.sh@10 -- # set +x 00:06:31.918 [2024-07-14 02:54:26.877961] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1888146 has claimed it. 
00:06:31.918 request: 00:06:31.918 { 00:06:31.918 "method": "framework_enable_cpumask_locks", 00:06:31.918 "req_id": 1 00:06:31.918 } 00:06:31.918 Got JSON-RPC error response 00:06:31.918 response: 00:06:31.918 { 00:06:31.918 "code": -32603, 00:06:31.918 "message": "Failed to claim CPU core: 2" 00:06:31.918 } 00:06:31.918 02:54:26 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:31.918 02:54:26 -- common/autotest_common.sh@643 -- # es=1 00:06:31.918 02:54:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:31.918 02:54:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:31.918 02:54:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:31.918 02:54:26 -- event/cpu_locks.sh@158 -- # waitforlisten 1888146 /var/tmp/spdk.sock 00:06:31.918 02:54:26 -- common/autotest_common.sh@819 -- # '[' -z 1888146 ']' 00:06:31.918 02:54:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.918 02:54:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:31.918 02:54:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:31.918 02:54:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:31.918 02:54:26 -- common/autotest_common.sh@10 -- # set +x 00:06:31.918 02:54:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.918 02:54:27 -- common/autotest_common.sh@852 -- # return 0 00:06:31.918 02:54:27 -- event/cpu_locks.sh@159 -- # waitforlisten 1888290 /var/tmp/spdk2.sock 00:06:31.918 02:54:27 -- common/autotest_common.sh@819 -- # '[' -z 1888290 ']' 00:06:31.918 02:54:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.918 02:54:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:31.918 02:54:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.918 02:54:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:31.918 02:54:27 -- common/autotest_common.sh@10 -- # set +x 00:06:32.178 02:54:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:32.178 02:54:27 -- common/autotest_common.sh@852 -- # return 0 00:06:32.178 02:54:27 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:32.178 02:54:27 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.178 02:54:27 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.178 02:54:27 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.178 00:06:32.178 real 0m2.464s 00:06:32.178 user 0m1.173s 00:06:32.178 sys 0m0.213s 00:06:32.178 02:54:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.178 02:54:27 -- common/autotest_common.sh@10 -- # set +x 00:06:32.178 
************************************ 00:06:32.178 END TEST locking_overlapped_coremask_via_rpc 00:06:32.178 ************************************ 00:06:32.178 02:54:27 -- event/cpu_locks.sh@174 -- # cleanup 00:06:32.178 02:54:27 -- event/cpu_locks.sh@15 -- # [[ -z 1888146 ]] 00:06:32.178 02:54:27 -- event/cpu_locks.sh@15 -- # killprocess 1888146 00:06:32.178 02:54:27 -- common/autotest_common.sh@926 -- # '[' -z 1888146 ']' 00:06:32.178 02:54:27 -- common/autotest_common.sh@930 -- # kill -0 1888146 00:06:32.178 02:54:27 -- common/autotest_common.sh@931 -- # uname 00:06:32.178 02:54:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:32.178 02:54:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1888146 00:06:32.178 02:54:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:32.178 02:54:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:32.178 02:54:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1888146' 00:06:32.178 killing process with pid 1888146 00:06:32.178 02:54:27 -- common/autotest_common.sh@945 -- # kill 1888146 00:06:32.178 02:54:27 -- common/autotest_common.sh@950 -- # wait 1888146 00:06:32.745 02:54:27 -- event/cpu_locks.sh@16 -- # [[ -z 1888290 ]] 00:06:32.745 02:54:27 -- event/cpu_locks.sh@16 -- # killprocess 1888290 00:06:32.745 02:54:27 -- common/autotest_common.sh@926 -- # '[' -z 1888290 ']' 00:06:32.745 02:54:27 -- common/autotest_common.sh@930 -- # kill -0 1888290 00:06:32.745 02:54:27 -- common/autotest_common.sh@931 -- # uname 00:06:32.745 02:54:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:32.745 02:54:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1888290 00:06:32.745 02:54:27 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:32.745 02:54:27 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:32.745 02:54:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 
1888290' 00:06:32.745 killing process with pid 1888290 00:06:32.745 02:54:27 -- common/autotest_common.sh@945 -- # kill 1888290 00:06:32.745 02:54:27 -- common/autotest_common.sh@950 -- # wait 1888290 00:06:33.004 02:54:28 -- event/cpu_locks.sh@18 -- # rm -f 00:06:33.004 02:54:28 -- event/cpu_locks.sh@1 -- # cleanup 00:06:33.004 02:54:28 -- event/cpu_locks.sh@15 -- # [[ -z 1888146 ]] 00:06:33.004 02:54:28 -- event/cpu_locks.sh@15 -- # killprocess 1888146 00:06:33.004 02:54:28 -- common/autotest_common.sh@926 -- # '[' -z 1888146 ']' 00:06:33.004 02:54:28 -- common/autotest_common.sh@930 -- # kill -0 1888146 00:06:33.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1888146) - No such process 00:06:33.004 02:54:28 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1888146 is not found' 00:06:33.004 Process with pid 1888146 is not found 00:06:33.004 02:54:28 -- event/cpu_locks.sh@16 -- # [[ -z 1888290 ]] 00:06:33.004 02:54:28 -- event/cpu_locks.sh@16 -- # killprocess 1888290 00:06:33.004 02:54:28 -- common/autotest_common.sh@926 -- # '[' -z 1888290 ']' 00:06:33.004 02:54:28 -- common/autotest_common.sh@930 -- # kill -0 1888290 00:06:33.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (1888290) - No such process 00:06:33.004 02:54:28 -- common/autotest_common.sh@953 -- # echo 'Process with pid 1888290 is not found' 00:06:33.004 Process with pid 1888290 is not found 00:06:33.004 02:54:28 -- event/cpu_locks.sh@18 -- # rm -f 00:06:33.004 00:06:33.004 real 0m19.292s 00:06:33.004 user 0m34.243s 00:06:33.004 sys 0m5.486s 00:06:33.004 02:54:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.004 02:54:28 -- common/autotest_common.sh@10 -- # set +x 00:06:33.004 ************************************ 00:06:33.004 END TEST cpu_locks 00:06:33.004 ************************************ 00:06:33.264 00:06:33.264 real 0m45.372s 00:06:33.264 user 1m26.468s 
00:06:33.264 sys 0m9.442s 00:06:33.264 02:54:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.264 02:54:28 -- common/autotest_common.sh@10 -- # set +x 00:06:33.264 ************************************ 00:06:33.264 END TEST event 00:06:33.264 ************************************ 00:06:33.264 02:54:28 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:33.264 02:54:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:33.264 02:54:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.264 02:54:28 -- common/autotest_common.sh@10 -- # set +x 00:06:33.264 ************************************ 00:06:33.264 START TEST thread 00:06:33.264 ************************************ 00:06:33.264 02:54:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:33.264 * Looking for test storage... 00:06:33.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:33.264 02:54:28 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.264 02:54:28 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:33.264 02:54:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.264 02:54:28 -- common/autotest_common.sh@10 -- # set +x 00:06:33.264 ************************************ 00:06:33.264 START TEST thread_poller_perf 00:06:33.264 ************************************ 00:06:33.264 02:54:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:33.264 [2024-07-14 02:54:28.350124] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:33.264 [2024-07-14 02:54:28.350222] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1888654 ] 00:06:33.264 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.264 [2024-07-14 02:54:28.408900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.264 [2024-07-14 02:54:28.494862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.264 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:34.640 ====================================== 00:06:34.640 busy:2709148456 (cyc) 00:06:34.640 total_run_count: 281000 00:06:34.640 tsc_hz: 2700000000 (cyc) 00:06:34.640 ====================================== 00:06:34.640 poller_cost: 9641 (cyc), 3570 (nsec) 00:06:34.640 00:06:34.640 real 0m1.248s 00:06:34.640 user 0m1.159s 00:06:34.640 sys 0m0.082s 00:06:34.640 02:54:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.640 02:54:29 -- common/autotest_common.sh@10 -- # set +x 00:06:34.640 ************************************ 00:06:34.640 END TEST thread_poller_perf 00:06:34.640 ************************************ 00:06:34.640 02:54:29 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.640 02:54:29 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:34.640 02:54:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:34.640 02:54:29 -- common/autotest_common.sh@10 -- # set +x 00:06:34.640 ************************************ 00:06:34.640 START TEST thread_poller_perf 00:06:34.640 ************************************ 00:06:34.640 02:54:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:34.640 
[2024-07-14 02:54:29.626351] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:34.640 [2024-07-14 02:54:29.626438] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1888810 ] 00:06:34.640 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.641 [2024-07-14 02:54:29.688463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.641 [2024-07-14 02:54:29.781378] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.641 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:36.020 ====================================== 00:06:36.020 busy:2703435820 (cyc) 00:06:36.020 total_run_count: 3821000 00:06:36.020 tsc_hz: 2700000000 (cyc) 00:06:36.020 ====================================== 00:06:36.020 poller_cost: 707 (cyc), 261 (nsec) 00:06:36.020 00:06:36.020 real 0m1.250s 00:06:36.020 user 0m1.161s 00:06:36.020 sys 0m0.082s 00:06:36.020 02:54:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.020 02:54:30 -- common/autotest_common.sh@10 -- # set +x 00:06:36.020 ************************************ 00:06:36.020 END TEST thread_poller_perf 00:06:36.020 ************************************ 00:06:36.020 02:54:30 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:36.020 00:06:36.020 real 0m2.603s 00:06:36.020 user 0m2.357s 00:06:36.020 sys 0m0.246s 00:06:36.020 02:54:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.020 02:54:30 -- common/autotest_common.sh@10 -- # set +x 00:06:36.020 ************************************ 00:06:36.020 END TEST thread 00:06:36.020 ************************************ 00:06:36.020 02:54:30 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:36.020 02:54:30 -- common/autotest_common.sh@1077 -- # 
'[' 2 -le 1 ']' 00:06:36.020 02:54:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.020 02:54:30 -- common/autotest_common.sh@10 -- # set +x 00:06:36.020 ************************************ 00:06:36.020 START TEST accel 00:06:36.020 ************************************ 00:06:36.020 02:54:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:36.020 * Looking for test storage... 00:06:36.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:36.020 02:54:30 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:36.020 02:54:30 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:36.020 02:54:30 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:36.020 02:54:30 -- accel/accel.sh@59 -- # spdk_tgt_pid=1889089 00:06:36.020 02:54:30 -- accel/accel.sh@60 -- # waitforlisten 1889089 00:06:36.020 02:54:30 -- common/autotest_common.sh@819 -- # '[' -z 1889089 ']' 00:06:36.020 02:54:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.020 02:54:30 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:36.020 02:54:30 -- accel/accel.sh@58 -- # build_accel_config 00:06:36.020 02:54:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.020 02:54:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.020 02:54:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:36.020 02:54:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.020 02:54:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.020 02:54:30 -- common/autotest_common.sh@10 -- # set +x 00:06:36.020 02:54:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.020 02:54:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.020 02:54:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.020 02:54:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.020 02:54:30 -- accel/accel.sh@42 -- # jq -r . 00:06:36.020 [2024-07-14 02:54:31.016717] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:36.020 [2024-07-14 02:54:31.016801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889089 ] 00:06:36.020 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.020 [2024-07-14 02:54:31.078458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.020 [2024-07-14 02:54:31.169835] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:36.020 [2024-07-14 02:54:31.170010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.956 02:54:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.956 02:54:31 -- common/autotest_common.sh@852 -- # return 0 00:06:36.956 02:54:31 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:36.956 02:54:31 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:36.956 02:54:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:36.956 02:54:31 -- common/autotest_common.sh@10 -- # set +x 00:06:36.956 02:54:31 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:36.956 02:54:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:36.956 02:54:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.956 02:54:31 -- accel/accel.sh@64 -- # IFS== 00:06:36.956 02:54:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.956 02:54:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.956 02:54:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.956 02:54:31 -- accel/accel.sh@64 -- # IFS== 00:06:36.956 02:54:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.956 02:54:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.956 02:54:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.956 02:54:31 -- accel/accel.sh@64 -- # IFS== 00:06:36.956 02:54:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.956 02:54:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.956 02:54:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.956 02:54:31 -- accel/accel.sh@64 -- # IFS== 00:06:36.956 02:54:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.956 02:54:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.957 02:54:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # IFS== 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.957 02:54:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.957 02:54:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # IFS== 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.957 02:54:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.957 02:54:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # IFS== 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # read -r opc 
module 00:06:36.957 02:54:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.957 02:54:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # IFS== 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.957 02:54:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.957 02:54:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # IFS== 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.957 02:54:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.957 02:54:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # IFS== 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.957 02:54:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.957 02:54:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # IFS== 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.957 02:54:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.957 02:54:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # IFS== 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.957 02:54:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.957 02:54:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # IFS== 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.957 02:54:31 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.957 02:54:31 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # IFS== 00:06:36.957 02:54:31 -- accel/accel.sh@64 -- # read -r opc module 00:06:36.957 02:54:31 -- 
accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:36.957 02:54:31 -- accel/accel.sh@67 -- # killprocess 1889089 00:06:36.957 02:54:31 -- common/autotest_common.sh@926 -- # '[' -z 1889089 ']' 00:06:36.957 02:54:31 -- common/autotest_common.sh@930 -- # kill -0 1889089 00:06:36.957 02:54:31 -- common/autotest_common.sh@931 -- # uname 00:06:36.957 02:54:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:36.957 02:54:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1889089 00:06:36.957 02:54:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:36.957 02:54:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:36.957 02:54:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1889089' 00:06:36.957 killing process with pid 1889089 00:06:36.957 02:54:32 -- common/autotest_common.sh@945 -- # kill 1889089 00:06:36.957 02:54:32 -- common/autotest_common.sh@950 -- # wait 1889089 00:06:37.216 02:54:32 -- accel/accel.sh@68 -- # trap - ERR 00:06:37.216 02:54:32 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:37.216 02:54:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:37.216 02:54:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.216 02:54:32 -- common/autotest_common.sh@10 -- # set +x 00:06:37.216 02:54:32 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:37.216 02:54:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:37.216 02:54:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.216 02:54:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.216 02:54:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.216 02:54:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.216 02:54:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.216 02:54:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.216 02:54:32 -- accel/accel.sh@41 -- # local IFS=, 
00:06:37.216 02:54:32 -- accel/accel.sh@42 -- # jq -r . 00:06:37.216 02:54:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.216 02:54:32 -- common/autotest_common.sh@10 -- # set +x 00:06:37.216 02:54:32 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:37.216 02:54:32 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:37.216 02:54:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.217 02:54:32 -- common/autotest_common.sh@10 -- # set +x 00:06:37.217 ************************************ 00:06:37.217 START TEST accel_missing_filename 00:06:37.217 ************************************ 00:06:37.217 02:54:32 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:37.217 02:54:32 -- common/autotest_common.sh@640 -- # local es=0 00:06:37.217 02:54:32 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:37.217 02:54:32 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:37.217 02:54:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.217 02:54:32 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:37.217 02:54:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.217 02:54:32 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:37.217 02:54:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:37.217 02:54:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.217 02:54:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.217 02:54:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.217 02:54:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.217 02:54:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.217 02:54:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.217 02:54:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.217 02:54:32 -- 
accel/accel.sh@42 -- # jq -r . 00:06:37.477 [2024-07-14 02:54:32.478607] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:37.477 [2024-07-14 02:54:32.478689] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889310 ] 00:06:37.477 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.477 [2024-07-14 02:54:32.540572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.477 [2024-07-14 02:54:32.631475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.477 [2024-07-14 02:54:32.693282] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.737 [2024-07-14 02:54:32.779545] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:37.737 A filename is required. 00:06:37.737 02:54:32 -- common/autotest_common.sh@643 -- # es=234 00:06:37.737 02:54:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:37.737 02:54:32 -- common/autotest_common.sh@652 -- # es=106 00:06:37.737 02:54:32 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:37.737 02:54:32 -- common/autotest_common.sh@660 -- # es=1 00:06:37.737 02:54:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:37.737 00:06:37.737 real 0m0.402s 00:06:37.737 user 0m0.289s 00:06:37.737 sys 0m0.147s 00:06:37.737 02:54:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.737 02:54:32 -- common/autotest_common.sh@10 -- # set +x 00:06:37.737 ************************************ 00:06:37.737 END TEST accel_missing_filename 00:06:37.737 ************************************ 00:06:37.737 02:54:32 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.737 02:54:32 -- common/autotest_common.sh@1077 -- # '[' 
10 -le 1 ']' 00:06:37.737 02:54:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.737 02:54:32 -- common/autotest_common.sh@10 -- # set +x 00:06:37.737 ************************************ 00:06:37.737 START TEST accel_compress_verify 00:06:37.737 ************************************ 00:06:37.737 02:54:32 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.737 02:54:32 -- common/autotest_common.sh@640 -- # local es=0 00:06:37.737 02:54:32 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.737 02:54:32 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:37.737 02:54:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.737 02:54:32 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:37.737 02:54:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.737 02:54:32 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.737 02:54:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:37.737 02:54:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.737 02:54:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.737 02:54:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.737 02:54:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.737 02:54:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.737 02:54:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.737 02:54:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.737 02:54:32 -- accel/accel.sh@42 -- # jq -r . 
00:06:37.737 [2024-07-14 02:54:32.902378] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:37.737 [2024-07-14 02:54:32.902445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889335 ] 00:06:37.737 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.737 [2024-07-14 02:54:32.962398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.996 [2024-07-14 02:54:33.054615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.996 [2024-07-14 02:54:33.116334] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:37.996 [2024-07-14 02:54:33.195610] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:38.257 00:06:38.257 Compression does not support the verify option, aborting. 00:06:38.257 02:54:33 -- common/autotest_common.sh@643 -- # es=161 00:06:38.257 02:54:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:38.257 02:54:33 -- common/autotest_common.sh@652 -- # es=33 00:06:38.257 02:54:33 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:38.257 02:54:33 -- common/autotest_common.sh@660 -- # es=1 00:06:38.257 02:54:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:38.257 00:06:38.257 real 0m0.388s 00:06:38.257 user 0m0.279s 00:06:38.257 sys 0m0.139s 00:06:38.258 02:54:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.258 02:54:33 -- common/autotest_common.sh@10 -- # set +x 00:06:38.258 ************************************ 00:06:38.258 END TEST accel_compress_verify 00:06:38.258 ************************************ 00:06:38.258 02:54:33 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:38.258 02:54:33 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:38.258 02:54:33 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.258 02:54:33 -- common/autotest_common.sh@10 -- # set +x 00:06:38.258 ************************************ 00:06:38.258 START TEST accel_wrong_workload 00:06:38.258 ************************************ 00:06:38.258 02:54:33 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:38.258 02:54:33 -- common/autotest_common.sh@640 -- # local es=0 00:06:38.258 02:54:33 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:38.258 02:54:33 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:38.258 02:54:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.258 02:54:33 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:38.258 02:54:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.258 02:54:33 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:38.258 02:54:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:38.258 02:54:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.258 02:54:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.258 02:54:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.258 02:54:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.258 02:54:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.258 02:54:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.258 02:54:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.258 02:54:33 -- accel/accel.sh@42 -- # jq -r . 
00:06:38.258 Unsupported workload type: foobar 00:06:38.258 [2024-07-14 02:54:33.316850] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:38.258 accel_perf options: 00:06:38.258 [-h help message] 00:06:38.258 [-q queue depth per core] 00:06:38.258 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:38.258 [-T number of threads per core 00:06:38.258 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:38.258 [-t time in seconds] 00:06:38.258 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:38.258 [ dif_verify, , dif_generate, dif_generate_copy 00:06:38.258 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:38.258 [-l for compress/decompress workloads, name of uncompressed input file 00:06:38.258 [-S for crc32c workload, use this seed value (default 0) 00:06:38.258 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:38.258 [-f for fill workload, use this BYTE value (default 255) 00:06:38.258 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:38.258 [-y verify result if this switch is on] 00:06:38.258 [-a tasks to allocate per core (default: same value as -q)] 00:06:38.258 Can be used to spread operations across a wider range of memory. 
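The "Unsupported workload type: foobar" failure above follows directly from the option constraints in the help text: `-w` must be one of the listed workloads, and `-x` must be non-negative. A rough shell sketch of those two checks (illustrative only — accel_perf's actual parser is C code inside SPDK, and this function name is hypothetical):

```shell
#!/usr/bin/env bash
# Illustrative re-statement of the two accel_perf constraints shown in the
# help text above; check_args is a hypothetical name, not an SPDK function.
valid_workloads="copy fill crc32c copy_crc32c compare compress decompress dualcast xor"
check_args() {
  local workload=$1 xor_srcs=$2
  case " $valid_workloads " in
    *" $workload "*) ;;          # -w must be a known workload type
    *) return 1 ;;               # mirrors "Unsupported workload type"
  esac
  [ "$xor_srcs" -ge 0 ]          # mirrors "-x option must be non-negative."
}
check_args crc32c 2 && echo "crc32c 2: accepted"   # prints: crc32c 2: accepted
check_args foobar 2 || echo "foobar: rejected"     # prints: foobar: rejected
check_args xor -1   || echo "xor -1: rejected"     # prints: xor -1: rejected
```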
00:06:38.258 02:54:33 -- common/autotest_common.sh@643 -- # es=1 00:06:38.258 02:54:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:38.258 02:54:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:38.258 02:54:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:38.258 00:06:38.258 real 0m0.022s 00:06:38.258 user 0m0.012s 00:06:38.258 sys 0m0.010s 00:06:38.258 02:54:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.258 02:54:33 -- common/autotest_common.sh@10 -- # set +x 00:06:38.258 ************************************ 00:06:38.258 END TEST accel_wrong_workload 00:06:38.258 ************************************ 00:06:38.258 02:54:33 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:38.258 02:54:33 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:38.258 02:54:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.258 02:54:33 -- common/autotest_common.sh@10 -- # set +x 00:06:38.258 Error: writing output failed: Broken pipe 00:06:38.258 ************************************ 00:06:38.258 START TEST accel_negative_buffers 00:06:38.258 ************************************ 00:06:38.258 02:54:33 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:38.258 02:54:33 -- common/autotest_common.sh@640 -- # local es=0 00:06:38.258 02:54:33 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:38.258 02:54:33 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:38.258 02:54:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.258 02:54:33 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:38.258 02:54:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.258 02:54:33 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:38.258 02:54:33 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:38.258 02:54:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.258 02:54:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.258 02:54:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.258 02:54:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.258 02:54:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.258 02:54:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.258 02:54:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.258 02:54:33 -- accel/accel.sh@42 -- # jq -r . 00:06:38.258 -x option must be non-negative. 00:06:38.258 [2024-07-14 02:54:33.357413] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:38.258 accel_perf options: 00:06:38.258 [-h help message] 00:06:38.258 [-q queue depth per core] 00:06:38.258 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:38.258 [-T number of threads per core 00:06:38.258 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:06:38.258 [-t time in seconds] 00:06:38.258 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:38.258 [ dif_verify, , dif_generate, dif_generate_copy 00:06:38.258 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:38.258 [-l for compress/decompress workloads, name of uncompressed input file 00:06:38.258 [-S for crc32c workload, use this seed value (default 0) 00:06:38.258 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:38.258 [-f for fill workload, use this BYTE value (default 255) 00:06:38.258 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:38.258 [-y verify result if this switch is on] 00:06:38.258 [-a tasks to allocate per core (default: same value as -q)] 00:06:38.258 Can be used to spread operations across a wider range of memory. 00:06:38.258 02:54:33 -- common/autotest_common.sh@643 -- # es=1 00:06:38.258 02:54:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:38.258 02:54:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:38.258 02:54:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:38.258 00:06:38.258 real 0m0.020s 00:06:38.258 user 0m0.010s 00:06:38.258 sys 0m0.010s 00:06:38.258 02:54:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.258 02:54:33 -- common/autotest_common.sh@10 -- # set +x 00:06:38.258 ************************************ 00:06:38.258 END TEST accel_negative_buffers 00:06:38.258 ************************************ 00:06:38.258 02:54:33 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:38.258 02:54:33 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:38.258 02:54:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.258 02:54:33 -- common/autotest_common.sh@10 -- # set +x 
00:06:38.258 Error: writing output failed: Broken pipe 00:06:38.258 ************************************ 00:06:38.258 START TEST accel_crc32c 00:06:38.258 ************************************ 00:06:38.258 02:54:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:38.258 02:54:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.258 02:54:33 -- accel/accel.sh@17 -- # local accel_module 00:06:38.258 02:54:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:38.258 02:54:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:38.258 02:54:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.258 02:54:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.258 02:54:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.258 02:54:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.258 02:54:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.258 02:54:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.258 02:54:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.258 02:54:33 -- accel/accel.sh@42 -- # jq -r . 00:06:38.258 [2024-07-14 02:54:33.398084] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:38.258 [2024-07-14 02:54:33.398142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889516 ] 00:06:38.258 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.258 [2024-07-14 02:54:33.460172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.519 [2024-07-14 02:54:33.552189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.901 02:54:34 -- accel/accel.sh@18 -- # out=' 00:06:39.901 SPDK Configuration: 00:06:39.901 Core mask: 0x1 00:06:39.901 00:06:39.901 Accel Perf Configuration: 00:06:39.901 Workload Type: crc32c 00:06:39.901 CRC-32C seed: 32 00:06:39.901 Transfer size: 4096 bytes 00:06:39.901 Vector count 1 00:06:39.901 Module: software 00:06:39.901 Queue depth: 32 00:06:39.901 Allocate depth: 32 00:06:39.901 # threads/core: 1 00:06:39.901 Run time: 1 seconds 00:06:39.901 Verify: Yes 00:06:39.901 00:06:39.901 Running for 1 seconds... 
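As a sanity check on the result table that follows, the bandwidth column is simply the transfer rate multiplied by the 4096-byte transfer size configured above, converted to MiB/s:

```shell
#!/usr/bin/env bash
# Bandwidth check: MiB/s = transfers/s × transfer size (bytes) ÷ 2^20.
# 403904 transfers/s at 4096 bytes each gives the 1577 MiB/s reported
# in the crc32c result table.
transfers_per_sec=403904
transfer_size=4096
echo $(( transfers_per_sec * transfer_size / 1048576 ))   # prints: 1577
```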
00:06:39.901 00:06:39.901 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:39.901 ------------------------------------------------------------------------------------ 00:06:39.901 0,0 403904/s 1577 MiB/s 0 0 00:06:39.901 ==================================================================================== 00:06:39.901 Total 403904/s 1577 MiB/s 0 0' 00:06:39.901 02:54:34 -- accel/accel.sh@20 -- # IFS=: 00:06:39.901 02:54:34 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:39.901 02:54:34 -- accel/accel.sh@20 -- # read -r var val 00:06:39.901 02:54:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:39.901 02:54:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.901 02:54:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:39.901 02:54:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.901 02:54:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.901 02:54:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:39.901 02:54:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:39.901 02:54:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:39.901 02:54:34 -- accel/accel.sh@42 -- # jq -r . 00:06:39.901 [2024-07-14 02:54:34.803572] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:39.901 [2024-07-14 02:54:34.803653] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889658 ] 00:06:39.901 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.901 [2024-07-14 02:54:34.865135] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.901 [2024-07-14 02:54:34.955564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.901 02:54:35 -- accel/accel.sh@21 -- # val= 00:06:39.901 02:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.901 02:54:35 -- accel/accel.sh@20 -- # IFS=: 00:06:39.901 02:54:35 -- accel/accel.sh@20 -- # read -r var val 00:06:39.901 02:54:35 -- accel/accel.sh@21 -- # val= 00:06:39.901 02:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.901 02:54:35 -- accel/accel.sh@20 -- # IFS=: 00:06:39.901 02:54:35 -- accel/accel.sh@20 -- # read -r var val 00:06:39.901 02:54:35 -- accel/accel.sh@21 -- # val=0x1 00:06:39.901 02:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.901 02:54:35 -- accel/accel.sh@20 -- # IFS=: 00:06:39.901 02:54:35 -- accel/accel.sh@20 -- # read -r var val 00:06:39.901 02:54:35 -- accel/accel.sh@21 -- # val= 00:06:39.901 02:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.901 02:54:35 -- accel/accel.sh@20 -- # IFS=: 00:06:39.901 02:54:35 -- accel/accel.sh@20 -- # read -r var val 00:06:39.901 02:54:35 -- accel/accel.sh@21 -- # val= 00:06:39.901 02:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.901 02:54:35 -- accel/accel.sh@20 -- # IFS=: 00:06:39.901 02:54:35 -- accel/accel.sh@20 -- # read -r var val 00:06:39.901 02:54:35 -- accel/accel.sh@21 -- # val=crc32c 00:06:39.902 02:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.902 02:54:35 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # IFS=: 00:06:39.902 02:54:35 -- 
accel/accel.sh@20 -- # read -r var val 00:06:39.902 02:54:35 -- accel/accel.sh@21 -- # val=32 00:06:39.902 02:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # IFS=: 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # read -r var val 00:06:39.902 02:54:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:39.902 02:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # IFS=: 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # read -r var val 00:06:39.902 02:54:35 -- accel/accel.sh@21 -- # val= 00:06:39.902 02:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # IFS=: 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # read -r var val 00:06:39.902 02:54:35 -- accel/accel.sh@21 -- # val=software 00:06:39.902 02:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.902 02:54:35 -- accel/accel.sh@23 -- # accel_module=software 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # IFS=: 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # read -r var val 00:06:39.902 02:54:35 -- accel/accel.sh@21 -- # val=32 00:06:39.902 02:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # IFS=: 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # read -r var val 00:06:39.902 02:54:35 -- accel/accel.sh@21 -- # val=32 00:06:39.902 02:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # IFS=: 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # read -r var val 00:06:39.902 02:54:35 -- accel/accel.sh@21 -- # val=1 00:06:39.902 02:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # IFS=: 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # read -r var val 00:06:39.902 02:54:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:39.902 02:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # IFS=: 00:06:39.902 02:54:35 -- accel/accel.sh@20 
-- # read -r var val 00:06:39.902 02:54:35 -- accel/accel.sh@21 -- # val=Yes 00:06:39.902 02:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # IFS=: 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # read -r var val 00:06:39.902 02:54:35 -- accel/accel.sh@21 -- # val= 00:06:39.902 02:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # IFS=: 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # read -r var val 00:06:39.902 02:54:35 -- accel/accel.sh@21 -- # val= 00:06:39.902 02:54:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # IFS=: 00:06:39.902 02:54:35 -- accel/accel.sh@20 -- # read -r var val 00:06:41.278 02:54:36 -- accel/accel.sh@21 -- # val= 00:06:41.278 02:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.278 02:54:36 -- accel/accel.sh@20 -- # IFS=: 00:06:41.278 02:54:36 -- accel/accel.sh@20 -- # read -r var val 00:06:41.278 02:54:36 -- accel/accel.sh@21 -- # val= 00:06:41.278 02:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.278 02:54:36 -- accel/accel.sh@20 -- # IFS=: 00:06:41.278 02:54:36 -- accel/accel.sh@20 -- # read -r var val 00:06:41.278 02:54:36 -- accel/accel.sh@21 -- # val= 00:06:41.278 02:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.278 02:54:36 -- accel/accel.sh@20 -- # IFS=: 00:06:41.278 02:54:36 -- accel/accel.sh@20 -- # read -r var val 00:06:41.278 02:54:36 -- accel/accel.sh@21 -- # val= 00:06:41.279 02:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.279 02:54:36 -- accel/accel.sh@20 -- # IFS=: 00:06:41.279 02:54:36 -- accel/accel.sh@20 -- # read -r var val 00:06:41.279 02:54:36 -- accel/accel.sh@21 -- # val= 00:06:41.279 02:54:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.279 02:54:36 -- accel/accel.sh@20 -- # IFS=: 00:06:41.279 02:54:36 -- accel/accel.sh@20 -- # read -r var val 00:06:41.279 02:54:36 -- accel/accel.sh@21 -- # val= 00:06:41.279 02:54:36 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:41.279 02:54:36 -- accel/accel.sh@20 -- # IFS=: 00:06:41.279 02:54:36 -- accel/accel.sh@20 -- # read -r var val 00:06:41.279 02:54:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.279 02:54:36 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:41.279 02:54:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.279 00:06:41.279 real 0m2.813s 00:06:41.279 user 0m2.521s 00:06:41.279 sys 0m0.285s 00:06:41.279 02:54:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.279 02:54:36 -- common/autotest_common.sh@10 -- # set +x 00:06:41.279 ************************************ 00:06:41.279 END TEST accel_crc32c 00:06:41.279 ************************************ 00:06:41.279 02:54:36 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:41.279 02:54:36 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:41.279 02:54:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.279 02:54:36 -- common/autotest_common.sh@10 -- # set +x 00:06:41.279 ************************************ 00:06:41.279 START TEST accel_crc32c_C2 00:06:41.279 ************************************ 00:06:41.279 02:54:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:41.279 02:54:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.279 02:54:36 -- accel/accel.sh@17 -- # local accel_module 00:06:41.279 02:54:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:41.279 02:54:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:41.279 02:54:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.279 02:54:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.279 02:54:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.279 02:54:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.279 02:54:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.279 02:54:36 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.279 02:54:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.279 02:54:36 -- accel/accel.sh@42 -- # jq -r . 00:06:41.279 [2024-07-14 02:54:36.242646] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:41.279 [2024-07-14 02:54:36.242736] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889817 ] 00:06:41.279 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.279 [2024-07-14 02:54:36.305645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.279 [2024-07-14 02:54:36.394142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.704 02:54:37 -- accel/accel.sh@18 -- # out=' 00:06:42.704 SPDK Configuration: 00:06:42.704 Core mask: 0x1 00:06:42.704 00:06:42.704 Accel Perf Configuration: 00:06:42.704 Workload Type: crc32c 00:06:42.704 CRC-32C seed: 0 00:06:42.705 Transfer size: 4096 bytes 00:06:42.705 Vector count 2 00:06:42.705 Module: software 00:06:42.705 Queue depth: 32 00:06:42.705 Allocate depth: 32 00:06:42.705 # threads/core: 1 00:06:42.705 Run time: 1 seconds 00:06:42.705 Verify: Yes 00:06:42.705 00:06:42.705 Running for 1 seconds... 
00:06:42.705 00:06:42.705 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.705 ------------------------------------------------------------------------------------ 00:06:42.705 0,0 321216/s 2509 MiB/s 0 0 00:06:42.705 ==================================================================================== 00:06:42.705 Total 321216/s 1254 MiB/s 0 0' 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 02:54:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 02:54:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:42.705 02:54:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.705 02:54:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.705 02:54:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.705 02:54:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.705 02:54:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.705 02:54:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.705 02:54:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.705 02:54:37 -- accel/accel.sh@42 -- # jq -r . 00:06:42.705 [2024-07-14 02:54:37.648586] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:42.705 [2024-07-14 02:54:37.648666] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1889959 ] 00:06:42.705 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.705 [2024-07-14 02:54:37.711087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.705 [2024-07-14 02:54:37.802932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.705 02:54:37 -- accel/accel.sh@21 -- # val= 00:06:42.705 02:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 02:54:37 -- accel/accel.sh@21 -- # val= 00:06:42.705 02:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 02:54:37 -- accel/accel.sh@21 -- # val=0x1 00:06:42.705 02:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 02:54:37 -- accel/accel.sh@21 -- # val= 00:06:42.705 02:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 02:54:37 -- accel/accel.sh@21 -- # val= 00:06:42.705 02:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 02:54:37 -- accel/accel.sh@21 -- # val=crc32c 00:06:42.705 02:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 02:54:37 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 02:54:37 -- 
accel/accel.sh@20 -- # read -r var val 00:06:42.705 02:54:37 -- accel/accel.sh@21 -- # val=0 00:06:42.705 02:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 02:54:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:42.705 02:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 02:54:37 -- accel/accel.sh@21 -- # val= 00:06:42.705 02:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 02:54:37 -- accel/accel.sh@21 -- # val=software 00:06:42.705 02:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 02:54:37 -- accel/accel.sh@23 -- # accel_module=software 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 02:54:37 -- accel/accel.sh@21 -- # val=32 00:06:42.705 02:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 02:54:37 -- accel/accel.sh@21 -- # val=32 00:06:42.705 02:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 02:54:37 -- accel/accel.sh@21 -- # val=1 00:06:42.705 02:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 02:54:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:42.705 02:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- 
# read -r var val 00:06:42.705 02:54:37 -- accel/accel.sh@21 -- # val=Yes 00:06:42.705 02:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 02:54:37 -- accel/accel.sh@21 -- # val= 00:06:42.705 02:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # read -r var val 00:06:42.705 02:54:37 -- accel/accel.sh@21 -- # val= 00:06:42.705 02:54:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # IFS=: 00:06:42.705 02:54:37 -- accel/accel.sh@20 -- # read -r var val 00:06:44.085 02:54:39 -- accel/accel.sh@21 -- # val= 00:06:44.085 02:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.085 02:54:39 -- accel/accel.sh@20 -- # IFS=: 00:06:44.085 02:54:39 -- accel/accel.sh@20 -- # read -r var val 00:06:44.085 02:54:39 -- accel/accel.sh@21 -- # val= 00:06:44.085 02:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.085 02:54:39 -- accel/accel.sh@20 -- # IFS=: 00:06:44.085 02:54:39 -- accel/accel.sh@20 -- # read -r var val 00:06:44.085 02:54:39 -- accel/accel.sh@21 -- # val= 00:06:44.085 02:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.085 02:54:39 -- accel/accel.sh@20 -- # IFS=: 00:06:44.085 02:54:39 -- accel/accel.sh@20 -- # read -r var val 00:06:44.085 02:54:39 -- accel/accel.sh@21 -- # val= 00:06:44.085 02:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.085 02:54:39 -- accel/accel.sh@20 -- # IFS=: 00:06:44.085 02:54:39 -- accel/accel.sh@20 -- # read -r var val 00:06:44.085 02:54:39 -- accel/accel.sh@21 -- # val= 00:06:44.085 02:54:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.085 02:54:39 -- accel/accel.sh@20 -- # IFS=: 00:06:44.085 02:54:39 -- accel/accel.sh@20 -- # read -r var val 00:06:44.085 02:54:39 -- accel/accel.sh@21 -- # val= 00:06:44.085 02:54:39 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:44.085 02:54:39 -- accel/accel.sh@20 -- # IFS=: 00:06:44.085 02:54:39 -- accel/accel.sh@20 -- # read -r var val 00:06:44.085 02:54:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.085 02:54:39 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:44.085 02:54:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.085 00:06:44.085 real 0m2.805s 00:06:44.085 user 0m2.510s 00:06:44.085 sys 0m0.288s 00:06:44.085 02:54:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.085 02:54:39 -- common/autotest_common.sh@10 -- # set +x 00:06:44.085 ************************************ 00:06:44.085 END TEST accel_crc32c_C2 00:06:44.085 ************************************ 00:06:44.085 02:54:39 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:44.085 02:54:39 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:44.085 02:54:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.085 02:54:39 -- common/autotest_common.sh@10 -- # set +x 00:06:44.085 ************************************ 00:06:44.085 START TEST accel_copy 00:06:44.085 ************************************ 00:06:44.085 02:54:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:44.085 02:54:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.085 02:54:39 -- accel/accel.sh@17 -- # local accel_module 00:06:44.085 02:54:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:44.085 02:54:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:44.085 02:54:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.085 02:54:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.085 02:54:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.085 02:54:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.085 02:54:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.085 02:54:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:06:44.085 02:54:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.085 02:54:39 -- accel/accel.sh@42 -- # jq -r . 00:06:44.085 [2024-07-14 02:54:39.067806] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:44.085 [2024-07-14 02:54:39.067938] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1890243 ] 00:06:44.085 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.085 [2024-07-14 02:54:39.128284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.085 [2024-07-14 02:54:39.218838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.466 02:54:40 -- accel/accel.sh@18 -- # out=' 00:06:45.466 SPDK Configuration: 00:06:45.466 Core mask: 0x1 00:06:45.466 00:06:45.466 Accel Perf Configuration: 00:06:45.466 Workload Type: copy 00:06:45.466 Transfer size: 4096 bytes 00:06:45.466 Vector count 1 00:06:45.466 Module: software 00:06:45.466 Queue depth: 32 00:06:45.466 Allocate depth: 32 00:06:45.466 # threads/core: 1 00:06:45.466 Run time: 1 seconds 00:06:45.466 Verify: Yes 00:06:45.466 00:06:45.466 Running for 1 seconds... 
00:06:45.466 00:06:45.466 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:45.466 ------------------------------------------------------------------------------------ 00:06:45.466 0,0 278144/s 1086 MiB/s 0 0 00:06:45.466 ==================================================================================== 00:06:45.466 Total 278144/s 1086 MiB/s 0 0' 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.466 02:54:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.466 02:54:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:45.466 02:54:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.466 02:54:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.466 02:54:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.466 02:54:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.466 02:54:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.466 02:54:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.466 02:54:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.466 02:54:40 -- accel/accel.sh@42 -- # jq -r . 00:06:45.466 [2024-07-14 02:54:40.474952] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:45.466 [2024-07-14 02:54:40.475032] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1890385 ] 00:06:45.466 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.466 [2024-07-14 02:54:40.535622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.466 [2024-07-14 02:54:40.627061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.466 02:54:40 -- accel/accel.sh@21 -- # val= 00:06:45.466 02:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.466 02:54:40 -- accel/accel.sh@21 -- # val= 00:06:45.466 02:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.466 02:54:40 -- accel/accel.sh@21 -- # val=0x1 00:06:45.466 02:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.466 02:54:40 -- accel/accel.sh@21 -- # val= 00:06:45.466 02:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.466 02:54:40 -- accel/accel.sh@21 -- # val= 00:06:45.466 02:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.466 02:54:40 -- accel/accel.sh@21 -- # val=copy 00:06:45.466 02:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.466 02:54:40 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.466 02:54:40 -- 
accel/accel.sh@20 -- # read -r var val 00:06:45.466 02:54:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.466 02:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.466 02:54:40 -- accel/accel.sh@21 -- # val= 00:06:45.466 02:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.466 02:54:40 -- accel/accel.sh@21 -- # val=software 00:06:45.466 02:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.466 02:54:40 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.466 02:54:40 -- accel/accel.sh@21 -- # val=32 00:06:45.466 02:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.466 02:54:40 -- accel/accel.sh@21 -- # val=32 00:06:45.466 02:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.466 02:54:40 -- accel/accel.sh@21 -- # val=1 00:06:45.466 02:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.466 02:54:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.467 02:54:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.467 02:54:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:45.467 02:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.467 02:54:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.467 02:54:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.467 02:54:40 -- accel/accel.sh@21 -- # val=Yes 00:06:45.467 02:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.467 02:54:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.467 02:54:40 -- accel/accel.sh@20 
-- # read -r var val 00:06:45.467 02:54:40 -- accel/accel.sh@21 -- # val= 00:06:45.467 02:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.467 02:54:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.467 02:54:40 -- accel/accel.sh@20 -- # read -r var val 00:06:45.467 02:54:40 -- accel/accel.sh@21 -- # val= 00:06:45.467 02:54:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.467 02:54:40 -- accel/accel.sh@20 -- # IFS=: 00:06:45.467 02:54:40 -- accel/accel.sh@20 -- # read -r var val 00:06:46.847 02:54:41 -- accel/accel.sh@21 -- # val= 00:06:46.847 02:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.847 02:54:41 -- accel/accel.sh@20 -- # IFS=: 00:06:46.847 02:54:41 -- accel/accel.sh@20 -- # read -r var val 00:06:46.847 02:54:41 -- accel/accel.sh@21 -- # val= 00:06:46.847 02:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.847 02:54:41 -- accel/accel.sh@20 -- # IFS=: 00:06:46.847 02:54:41 -- accel/accel.sh@20 -- # read -r var val 00:06:46.847 02:54:41 -- accel/accel.sh@21 -- # val= 00:06:46.847 02:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.847 02:54:41 -- accel/accel.sh@20 -- # IFS=: 00:06:46.847 02:54:41 -- accel/accel.sh@20 -- # read -r var val 00:06:46.847 02:54:41 -- accel/accel.sh@21 -- # val= 00:06:46.847 02:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.847 02:54:41 -- accel/accel.sh@20 -- # IFS=: 00:06:46.847 02:54:41 -- accel/accel.sh@20 -- # read -r var val 00:06:46.847 02:54:41 -- accel/accel.sh@21 -- # val= 00:06:46.847 02:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.847 02:54:41 -- accel/accel.sh@20 -- # IFS=: 00:06:46.847 02:54:41 -- accel/accel.sh@20 -- # read -r var val 00:06:46.847 02:54:41 -- accel/accel.sh@21 -- # val= 00:06:46.847 02:54:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.847 02:54:41 -- accel/accel.sh@20 -- # IFS=: 00:06:46.847 02:54:41 -- accel/accel.sh@20 -- # read -r var val 00:06:46.847 02:54:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.847 02:54:41 -- 
accel/accel.sh@28 -- # [[ -n copy ]] 00:06:46.847 02:54:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.847 00:06:46.847 real 0m2.816s 00:06:46.847 user 0m2.509s 00:06:46.847 sys 0m0.299s 00:06:46.847 02:54:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.847 02:54:41 -- common/autotest_common.sh@10 -- # set +x 00:06:46.847 ************************************ 00:06:46.847 END TEST accel_copy 00:06:46.847 ************************************ 00:06:46.847 02:54:41 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:46.847 02:54:41 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:46.847 02:54:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.847 02:54:41 -- common/autotest_common.sh@10 -- # set +x 00:06:46.847 ************************************ 00:06:46.847 START TEST accel_fill 00:06:46.847 ************************************ 00:06:46.847 02:54:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:46.847 02:54:41 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.847 02:54:41 -- accel/accel.sh@17 -- # local accel_module 00:06:46.847 02:54:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:46.847 02:54:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:46.847 02:54:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.848 02:54:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.848 02:54:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.848 02:54:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.848 02:54:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.848 02:54:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.848 02:54:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.848 02:54:41 -- accel/accel.sh@42 -- # jq -r . 
00:06:46.848 [2024-07-14 02:54:41.909126] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:46.848 [2024-07-14 02:54:41.909207] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1890540 ] 00:06:46.848 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.848 [2024-07-14 02:54:41.969457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.848 [2024-07-14 02:54:42.060287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.227 02:54:43 -- accel/accel.sh@18 -- # out=' 00:06:48.227 SPDK Configuration: 00:06:48.227 Core mask: 0x1 00:06:48.227 00:06:48.227 Accel Perf Configuration: 00:06:48.227 Workload Type: fill 00:06:48.227 Fill pattern: 0x80 00:06:48.227 Transfer size: 4096 bytes 00:06:48.227 Vector count 1 00:06:48.227 Module: software 00:06:48.227 Queue depth: 64 00:06:48.227 Allocate depth: 64 00:06:48.227 # threads/core: 1 00:06:48.227 Run time: 1 seconds 00:06:48.227 Verify: Yes 00:06:48.227 00:06:48.227 Running for 1 seconds... 
00:06:48.227 00:06:48.227 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.227 ------------------------------------------------------------------------------------ 00:06:48.227 0,0 404416/s 1579 MiB/s 0 0 00:06:48.227 ==================================================================================== 00:06:48.227 Total 404416/s 1579 MiB/s 0 0' 00:06:48.227 02:54:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.227 02:54:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:48.227 02:54:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.227 02:54:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:48.227 02:54:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.227 02:54:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.227 02:54:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.227 02:54:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.227 02:54:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.227 02:54:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.227 02:54:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.227 02:54:43 -- accel/accel.sh@42 -- # jq -r . 00:06:48.227 [2024-07-14 02:54:43.312219] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:48.227 [2024-07-14 02:54:43.312297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1890684 ] 00:06:48.227 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.227 [2024-07-14 02:54:43.372291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.227 [2024-07-14 02:54:43.462181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.486 02:54:43 -- accel/accel.sh@21 -- # val= 00:06:48.486 02:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.486 02:54:43 -- accel/accel.sh@21 -- # val= 00:06:48.486 02:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.486 02:54:43 -- accel/accel.sh@21 -- # val=0x1 00:06:48.486 02:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.486 02:54:43 -- accel/accel.sh@21 -- # val= 00:06:48.486 02:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.486 02:54:43 -- accel/accel.sh@21 -- # val= 00:06:48.486 02:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.486 02:54:43 -- accel/accel.sh@21 -- # val=fill 00:06:48.486 02:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.486 02:54:43 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.486 02:54:43 -- 
accel/accel.sh@20 -- # read -r var val 00:06:48.486 02:54:43 -- accel/accel.sh@21 -- # val=0x80 00:06:48.486 02:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.486 02:54:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.486 02:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.486 02:54:43 -- accel/accel.sh@21 -- # val= 00:06:48.486 02:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.486 02:54:43 -- accel/accel.sh@21 -- # val=software 00:06:48.486 02:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.486 02:54:43 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.486 02:54:43 -- accel/accel.sh@21 -- # val=64 00:06:48.486 02:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.486 02:54:43 -- accel/accel.sh@21 -- # val=64 00:06:48.486 02:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.486 02:54:43 -- accel/accel.sh@21 -- # val=1 00:06:48.486 02:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.486 02:54:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.486 02:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.486 02:54:43 -- accel/accel.sh@20 
-- # read -r var val 00:06:48.486 02:54:43 -- accel/accel.sh@21 -- # val=Yes 00:06:48.486 02:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.486 02:54:43 -- accel/accel.sh@21 -- # val= 00:06:48.486 02:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # read -r var val 00:06:48.486 02:54:43 -- accel/accel.sh@21 -- # val= 00:06:48.486 02:54:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # IFS=: 00:06:48.486 02:54:43 -- accel/accel.sh@20 -- # read -r var val 00:06:49.865 02:54:44 -- accel/accel.sh@21 -- # val= 00:06:49.865 02:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.865 02:54:44 -- accel/accel.sh@20 -- # IFS=: 00:06:49.865 02:54:44 -- accel/accel.sh@20 -- # read -r var val 00:06:49.865 02:54:44 -- accel/accel.sh@21 -- # val= 00:06:49.865 02:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.865 02:54:44 -- accel/accel.sh@20 -- # IFS=: 00:06:49.865 02:54:44 -- accel/accel.sh@20 -- # read -r var val 00:06:49.865 02:54:44 -- accel/accel.sh@21 -- # val= 00:06:49.865 02:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.865 02:54:44 -- accel/accel.sh@20 -- # IFS=: 00:06:49.865 02:54:44 -- accel/accel.sh@20 -- # read -r var val 00:06:49.865 02:54:44 -- accel/accel.sh@21 -- # val= 00:06:49.865 02:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.865 02:54:44 -- accel/accel.sh@20 -- # IFS=: 00:06:49.865 02:54:44 -- accel/accel.sh@20 -- # read -r var val 00:06:49.865 02:54:44 -- accel/accel.sh@21 -- # val= 00:06:49.865 02:54:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.865 02:54:44 -- accel/accel.sh@20 -- # IFS=: 00:06:49.865 02:54:44 -- accel/accel.sh@20 -- # read -r var val 00:06:49.865 02:54:44 -- accel/accel.sh@21 -- # val= 00:06:49.865 02:54:44 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:49.865 02:54:44 -- accel/accel.sh@20 -- # IFS=: 00:06:49.865 02:54:44 -- accel/accel.sh@20 -- # read -r var val 00:06:49.865 02:54:44 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:49.865 02:54:44 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:49.865 02:54:44 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.865 00:06:49.865 real 0m2.792s 00:06:49.865 user 0m2.502s 00:06:49.865 sys 0m0.282s 00:06:49.865 02:54:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.865 02:54:44 -- common/autotest_common.sh@10 -- # set +x 00:06:49.865 ************************************ 00:06:49.865 END TEST accel_fill 00:06:49.865 ************************************ 00:06:49.865 02:54:44 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:49.865 02:54:44 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:49.865 02:54:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.865 02:54:44 -- common/autotest_common.sh@10 -- # set +x 00:06:49.865 ************************************ 00:06:49.865 START TEST accel_copy_crc32c 00:06:49.865 ************************************ 00:06:49.865 02:54:44 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:49.865 02:54:44 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.865 02:54:44 -- accel/accel.sh@17 -- # local accel_module 00:06:49.865 02:54:44 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:49.865 02:54:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:49.865 02:54:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.865 02:54:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.865 02:54:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.865 02:54:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.865 02:54:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.865 02:54:44 -- 
accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.865 02:54:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.865 02:54:44 -- accel/accel.sh@42 -- # jq -r . 00:06:49.865 [2024-07-14 02:54:44.728069] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:49.865 [2024-07-14 02:54:44.728151] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1890963 ] 00:06:49.865 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.865 [2024-07-14 02:54:44.791175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.865 [2024-07-14 02:54:44.885632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.243 02:54:46 -- accel/accel.sh@18 -- # out=' 00:06:51.243 SPDK Configuration: 00:06:51.243 Core mask: 0x1 00:06:51.243 00:06:51.243 Accel Perf Configuration: 00:06:51.243 Workload Type: copy_crc32c 00:06:51.243 CRC-32C seed: 0 00:06:51.243 Vector size: 4096 bytes 00:06:51.243 Transfer size: 4096 bytes 00:06:51.243 Vector count 1 00:06:51.243 Module: software 00:06:51.243 Queue depth: 32 00:06:51.243 Allocate depth: 32 00:06:51.243 # threads/core: 1 00:06:51.243 Run time: 1 seconds 00:06:51.243 Verify: Yes 00:06:51.243 00:06:51.243 Running for 1 seconds... 
00:06:51.243 00:06:51.243 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.243 ------------------------------------------------------------------------------------ 00:06:51.243 0,0 217344/s 849 MiB/s 0 0 00:06:51.243 ==================================================================================== 00:06:51.243 Total 217344/s 849 MiB/s 0 0' 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.243 02:54:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:51.243 02:54:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.243 02:54:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.243 02:54:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.243 02:54:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.243 02:54:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.243 02:54:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.243 02:54:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.243 02:54:46 -- accel/accel.sh@42 -- # jq -r . 00:06:51.243 [2024-07-14 02:54:46.139282] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:51.243 [2024-07-14 02:54:46.139366] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1891112 ] 00:06:51.243 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.243 [2024-07-14 02:54:46.199462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.243 [2024-07-14 02:54:46.290368] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.243 02:54:46 -- accel/accel.sh@21 -- # val= 00:06:51.243 02:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.243 02:54:46 -- accel/accel.sh@21 -- # val= 00:06:51.243 02:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.243 02:54:46 -- accel/accel.sh@21 -- # val=0x1 00:06:51.243 02:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.243 02:54:46 -- accel/accel.sh@21 -- # val= 00:06:51.243 02:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.243 02:54:46 -- accel/accel.sh@21 -- # val= 00:06:51.243 02:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.243 02:54:46 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:51.243 02:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.243 02:54:46 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- 
accel/accel.sh@20 -- # read -r var val 00:06:51.243 02:54:46 -- accel/accel.sh@21 -- # val=0 00:06:51.243 02:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.243 02:54:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.243 02:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.243 02:54:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:51.243 02:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.243 02:54:46 -- accel/accel.sh@21 -- # val= 00:06:51.243 02:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.243 02:54:46 -- accel/accel.sh@21 -- # val=software 00:06:51.243 02:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.243 02:54:46 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.243 02:54:46 -- accel/accel.sh@21 -- # val=32 00:06:51.243 02:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.243 02:54:46 -- accel/accel.sh@21 -- # val=32 00:06:51.243 02:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.243 02:54:46 -- accel/accel.sh@21 -- # val=1 00:06:51.243 02:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- accel/accel.sh@20 
-- # read -r var val 00:06:51.243 02:54:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.243 02:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.243 02:54:46 -- accel/accel.sh@21 -- # val=Yes 00:06:51.243 02:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.243 02:54:46 -- accel/accel.sh@21 -- # val= 00:06:51.243 02:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # read -r var val 00:06:51.243 02:54:46 -- accel/accel.sh@21 -- # val= 00:06:51.243 02:54:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # IFS=: 00:06:51.243 02:54:46 -- accel/accel.sh@20 -- # read -r var val 00:06:52.622 02:54:47 -- accel/accel.sh@21 -- # val= 00:06:52.622 02:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.622 02:54:47 -- accel/accel.sh@20 -- # IFS=: 00:06:52.622 02:54:47 -- accel/accel.sh@20 -- # read -r var val 00:06:52.622 02:54:47 -- accel/accel.sh@21 -- # val= 00:06:52.622 02:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.622 02:54:47 -- accel/accel.sh@20 -- # IFS=: 00:06:52.622 02:54:47 -- accel/accel.sh@20 -- # read -r var val 00:06:52.622 02:54:47 -- accel/accel.sh@21 -- # val= 00:06:52.622 02:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.622 02:54:47 -- accel/accel.sh@20 -- # IFS=: 00:06:52.622 02:54:47 -- accel/accel.sh@20 -- # read -r var val 00:06:52.622 02:54:47 -- accel/accel.sh@21 -- # val= 00:06:52.622 02:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.622 02:54:47 -- accel/accel.sh@20 -- # IFS=: 00:06:52.622 02:54:47 -- accel/accel.sh@20 -- # read -r var val 00:06:52.622 02:54:47 -- accel/accel.sh@21 -- # val= 00:06:52.622 02:54:47 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:52.622 02:54:47 -- accel/accel.sh@20 -- # IFS=: 00:06:52.622 02:54:47 -- accel/accel.sh@20 -- # read -r var val 00:06:52.622 02:54:47 -- accel/accel.sh@21 -- # val= 00:06:52.622 02:54:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.622 02:54:47 -- accel/accel.sh@20 -- # IFS=: 00:06:52.622 02:54:47 -- accel/accel.sh@20 -- # read -r var val 00:06:52.622 02:54:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:52.622 02:54:47 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:52.622 02:54:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:52.622 00:06:52.622 real 0m2.820s 00:06:52.622 user 0m2.519s 00:06:52.622 sys 0m0.293s 00:06:52.622 02:54:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.622 02:54:47 -- common/autotest_common.sh@10 -- # set +x 00:06:52.622 ************************************ 00:06:52.622 END TEST accel_copy_crc32c 00:06:52.622 ************************************ 00:06:52.622 02:54:47 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:52.622 02:54:47 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:52.622 02:54:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.622 02:54:47 -- common/autotest_common.sh@10 -- # set +x 00:06:52.622 ************************************ 00:06:52.622 START TEST accel_copy_crc32c_C2 00:06:52.622 ************************************ 00:06:52.622 02:54:47 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:52.622 02:54:47 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.622 02:54:47 -- accel/accel.sh@17 -- # local accel_module 00:06:52.622 02:54:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:52.622 02:54:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:52.622 02:54:47 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:52.622 02:54:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.622 02:54:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.622 02:54:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.622 02:54:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.622 02:54:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.622 02:54:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.622 02:54:47 -- accel/accel.sh@42 -- # jq -r . 00:06:52.622 [2024-07-14 02:54:47.571982] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:52.622 [2024-07-14 02:54:47.572065] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1891267 ] 00:06:52.622 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.622 [2024-07-14 02:54:47.633024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.622 [2024-07-14 02:54:47.724383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.000 02:54:48 -- accel/accel.sh@18 -- # out=' 00:06:54.000 SPDK Configuration: 00:06:54.000 Core mask: 0x1 00:06:54.000 00:06:54.000 Accel Perf Configuration: 00:06:54.000 Workload Type: copy_crc32c 00:06:54.000 CRC-32C seed: 0 00:06:54.000 Vector size: 4096 bytes 00:06:54.000 Transfer size: 8192 bytes 00:06:54.000 Vector count 2 00:06:54.000 Module: software 00:06:54.000 Queue depth: 32 00:06:54.000 Allocate depth: 32 00:06:54.000 # threads/core: 1 00:06:54.000 Run time: 1 seconds 00:06:54.000 Verify: Yes 00:06:54.000 00:06:54.000 Running for 1 seconds... 
00:06:54.000 00:06:54.000 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.000 ------------------------------------------------------------------------------------ 00:06:54.000 0,0 153504/s 1199 MiB/s 0 0 00:06:54.000 ==================================================================================== 00:06:54.000 Total 153504/s 599 MiB/s 0 0' 00:06:54.000 02:54:48 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:54.000 02:54:48 -- accel/accel.sh@20 -- # read -r var val 00:06:54.000 02:54:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:54.000 02:54:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.000 02:54:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.000 02:54:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.000 02:54:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.000 02:54:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.000 02:54:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.000 02:54:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.000 02:54:48 -- accel/accel.sh@42 -- # jq -r . 00:06:54.000 [2024-07-14 02:54:48.979615] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:54.000 [2024-07-14 02:54:48.979695] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1891405 ] 00:06:54.000 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.000 [2024-07-14 02:54:49.039702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.000 [2024-07-14 02:54:49.130088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.000 02:54:49 -- accel/accel.sh@21 -- # val= 00:06:54.000 02:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # read -r var val 00:06:54.000 02:54:49 -- accel/accel.sh@21 -- # val= 00:06:54.000 02:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # read -r var val 00:06:54.000 02:54:49 -- accel/accel.sh@21 -- # val=0x1 00:06:54.000 02:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # read -r var val 00:06:54.000 02:54:49 -- accel/accel.sh@21 -- # val= 00:06:54.000 02:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # read -r var val 00:06:54.000 02:54:49 -- accel/accel.sh@21 -- # val= 00:06:54.000 02:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # read -r var val 00:06:54.000 02:54:49 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:54.000 02:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.000 02:54:49 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:49 -- 
accel/accel.sh@20 -- # read -r var val 00:06:54.000 02:54:49 -- accel/accel.sh@21 -- # val=0 00:06:54.000 02:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # read -r var val 00:06:54.000 02:54:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.000 02:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # read -r var val 00:06:54.000 02:54:49 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:54.000 02:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # read -r var val 00:06:54.000 02:54:49 -- accel/accel.sh@21 -- # val= 00:06:54.000 02:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # read -r var val 00:06:54.000 02:54:49 -- accel/accel.sh@21 -- # val=software 00:06:54.000 02:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.000 02:54:49 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # read -r var val 00:06:54.000 02:54:49 -- accel/accel.sh@21 -- # val=32 00:06:54.000 02:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # read -r var val 00:06:54.000 02:54:49 -- accel/accel.sh@21 -- # val=32 00:06:54.000 02:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # read -r var val 00:06:54.000 02:54:49 -- accel/accel.sh@21 -- # val=1 00:06:54.000 02:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:49 -- accel/accel.sh@20 
-- # read -r var val 00:06:54.000 02:54:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.000 02:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # read -r var val 00:06:54.000 02:54:49 -- accel/accel.sh@21 -- # val=Yes 00:06:54.000 02:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # read -r var val 00:06:54.000 02:54:49 -- accel/accel.sh@21 -- # val= 00:06:54.000 02:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # read -r var val 00:06:54.000 02:54:49 -- accel/accel.sh@21 -- # val= 00:06:54.000 02:54:49 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # IFS=: 00:06:54.000 02:54:49 -- accel/accel.sh@20 -- # read -r var val 00:06:55.379 02:54:50 -- accel/accel.sh@21 -- # val= 00:06:55.379 02:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.379 02:54:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.379 02:54:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.379 02:54:50 -- accel/accel.sh@21 -- # val= 00:06:55.379 02:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.379 02:54:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.379 02:54:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.379 02:54:50 -- accel/accel.sh@21 -- # val= 00:06:55.379 02:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.379 02:54:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.379 02:54:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.379 02:54:50 -- accel/accel.sh@21 -- # val= 00:06:55.379 02:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.379 02:54:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.379 02:54:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.379 02:54:50 -- accel/accel.sh@21 -- # val= 00:06:55.379 02:54:50 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:55.379 02:54:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.379 02:54:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.379 02:54:50 -- accel/accel.sh@21 -- # val= 00:06:55.379 02:54:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.379 02:54:50 -- accel/accel.sh@20 -- # IFS=: 00:06:55.379 02:54:50 -- accel/accel.sh@20 -- # read -r var val 00:06:55.379 02:54:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.379 02:54:50 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:55.379 02:54:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.379 00:06:55.379 real 0m2.799s 00:06:55.379 user 0m2.503s 00:06:55.379 sys 0m0.288s 00:06:55.379 02:54:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.379 02:54:50 -- common/autotest_common.sh@10 -- # set +x 00:06:55.379 ************************************ 00:06:55.379 END TEST accel_copy_crc32c_C2 00:06:55.379 ************************************ 00:06:55.380 02:54:50 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:55.380 02:54:50 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:55.380 02:54:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.380 02:54:50 -- common/autotest_common.sh@10 -- # set +x 00:06:55.380 ************************************ 00:06:55.380 START TEST accel_dualcast 00:06:55.380 ************************************ 00:06:55.380 02:54:50 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:55.380 02:54:50 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.380 02:54:50 -- accel/accel.sh@17 -- # local accel_module 00:06:55.380 02:54:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:55.380 02:54:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:55.380 02:54:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.380 02:54:50 -- 
accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.380 02:54:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.380 02:54:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.380 02:54:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.380 02:54:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.380 02:54:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.380 02:54:50 -- accel/accel.sh@42 -- # jq -r . 00:06:55.380 [2024-07-14 02:54:50.396109] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:55.380 [2024-07-14 02:54:50.396193] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1891689 ] 00:06:55.380 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.380 [2024-07-14 02:54:50.459391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.380 [2024-07-14 02:54:50.552766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.758 02:54:51 -- accel/accel.sh@18 -- # out=' 00:06:56.758 SPDK Configuration: 00:06:56.758 Core mask: 0x1 00:06:56.758 00:06:56.758 Accel Perf Configuration: 00:06:56.758 Workload Type: dualcast 00:06:56.758 Transfer size: 4096 bytes 00:06:56.758 Vector count 1 00:06:56.758 Module: software 00:06:56.758 Queue depth: 32 00:06:56.758 Allocate depth: 32 00:06:56.758 # threads/core: 1 00:06:56.758 Run time: 1 seconds 00:06:56.758 Verify: Yes 00:06:56.758 00:06:56.758 Running for 1 seconds... 
00:06:56.758 00:06:56.758 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.758 ------------------------------------------------------------------------------------ 00:06:56.758 0,0 297312/s 1161 MiB/s 0 0 00:06:56.758 ==================================================================================== 00:06:56.758 Total 297312/s 1161 MiB/s 0 0' 00:06:56.758 02:54:51 -- accel/accel.sh@20 -- # IFS=: 00:06:56.758 02:54:51 -- accel/accel.sh@20 -- # read -r var val 00:06:56.758 02:54:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:56.758 02:54:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:56.758 02:54:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.758 02:54:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.758 02:54:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.758 02:54:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.758 02:54:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.758 02:54:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.758 02:54:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.758 02:54:51 -- accel/accel.sh@42 -- # jq -r . 00:06:56.758 [2024-07-14 02:54:51.799696] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:56.758 [2024-07-14 02:54:51.799777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1891833 ] 00:06:56.758 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.758 [2024-07-14 02:54:51.862485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.758 [2024-07-14 02:54:51.953105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.017 02:54:52 -- accel/accel.sh@21 -- # val= 00:06:57.017 02:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # IFS=: 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # read -r var val 00:06:57.017 02:54:52 -- accel/accel.sh@21 -- # val= 00:06:57.017 02:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # IFS=: 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # read -r var val 00:06:57.017 02:54:52 -- accel/accel.sh@21 -- # val=0x1 00:06:57.017 02:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # IFS=: 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # read -r var val 00:06:57.017 02:54:52 -- accel/accel.sh@21 -- # val= 00:06:57.017 02:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # IFS=: 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # read -r var val 00:06:57.017 02:54:52 -- accel/accel.sh@21 -- # val= 00:06:57.017 02:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # IFS=: 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # read -r var val 00:06:57.017 02:54:52 -- accel/accel.sh@21 -- # val=dualcast 00:06:57.017 02:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.017 02:54:52 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # IFS=: 00:06:57.017 02:54:52 -- 
accel/accel.sh@20 -- # read -r var val 00:06:57.017 02:54:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.017 02:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # IFS=: 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # read -r var val 00:06:57.017 02:54:52 -- accel/accel.sh@21 -- # val= 00:06:57.017 02:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # IFS=: 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # read -r var val 00:06:57.017 02:54:52 -- accel/accel.sh@21 -- # val=software 00:06:57.017 02:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.017 02:54:52 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # IFS=: 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # read -r var val 00:06:57.017 02:54:52 -- accel/accel.sh@21 -- # val=32 00:06:57.017 02:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # IFS=: 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # read -r var val 00:06:57.017 02:54:52 -- accel/accel.sh@21 -- # val=32 00:06:57.017 02:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # IFS=: 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # read -r var val 00:06:57.017 02:54:52 -- accel/accel.sh@21 -- # val=1 00:06:57.017 02:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # IFS=: 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # read -r var val 00:06:57.017 02:54:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.017 02:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # IFS=: 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # read -r var val 00:06:57.017 02:54:52 -- accel/accel.sh@21 -- # val=Yes 00:06:57.017 02:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # IFS=: 00:06:57.017 02:54:52 -- accel/accel.sh@20 
-- # read -r var val 00:06:57.017 02:54:52 -- accel/accel.sh@21 -- # val= 00:06:57.017 02:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # IFS=: 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # read -r var val 00:06:57.017 02:54:52 -- accel/accel.sh@21 -- # val= 00:06:57.017 02:54:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # IFS=: 00:06:57.017 02:54:52 -- accel/accel.sh@20 -- # read -r var val 00:06:57.955 02:54:53 -- accel/accel.sh@21 -- # val= 00:06:57.955 02:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.955 02:54:53 -- accel/accel.sh@20 -- # IFS=: 00:06:57.955 02:54:53 -- accel/accel.sh@20 -- # read -r var val 00:06:57.955 02:54:53 -- accel/accel.sh@21 -- # val= 00:06:57.955 02:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.955 02:54:53 -- accel/accel.sh@20 -- # IFS=: 00:06:57.955 02:54:53 -- accel/accel.sh@20 -- # read -r var val 00:06:57.955 02:54:53 -- accel/accel.sh@21 -- # val= 00:06:57.955 02:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.955 02:54:53 -- accel/accel.sh@20 -- # IFS=: 00:06:57.955 02:54:53 -- accel/accel.sh@20 -- # read -r var val 00:06:57.955 02:54:53 -- accel/accel.sh@21 -- # val= 00:06:57.955 02:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.955 02:54:53 -- accel/accel.sh@20 -- # IFS=: 00:06:57.955 02:54:53 -- accel/accel.sh@20 -- # read -r var val 00:06:57.955 02:54:53 -- accel/accel.sh@21 -- # val= 00:06:57.955 02:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.955 02:54:53 -- accel/accel.sh@20 -- # IFS=: 00:06:57.955 02:54:53 -- accel/accel.sh@20 -- # read -r var val 00:06:57.955 02:54:53 -- accel/accel.sh@21 -- # val= 00:06:57.955 02:54:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.955 02:54:53 -- accel/accel.sh@20 -- # IFS=: 00:06:57.955 02:54:53 -- accel/accel.sh@20 -- # read -r var val 00:06:57.955 02:54:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:57.955 02:54:53 -- 
accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:57.955 02:54:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.955 00:06:57.955 real 0m2.813s 00:06:57.955 user 0m2.524s 00:06:57.955 sys 0m0.280s 00:06:57.955 02:54:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.955 02:54:53 -- common/autotest_common.sh@10 -- # set +x 00:06:57.955 ************************************ 00:06:57.955 END TEST accel_dualcast 00:06:57.956 ************************************ 00:06:58.218 02:54:53 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:58.218 02:54:53 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:58.218 02:54:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.218 02:54:53 -- common/autotest_common.sh@10 -- # set +x 00:06:58.218 ************************************ 00:06:58.218 START TEST accel_compare 00:06:58.218 ************************************ 00:06:58.218 02:54:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:58.218 02:54:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.218 02:54:53 -- accel/accel.sh@17 -- # local accel_module 00:06:58.218 02:54:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:58.218 02:54:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:58.218 02:54:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.218 02:54:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.218 02:54:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.218 02:54:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.218 02:54:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.218 02:54:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.218 02:54:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.218 02:54:53 -- accel/accel.sh@42 -- # jq -r . 
00:06:58.218 [2024-07-14 02:54:53.234088] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:58.218 [2024-07-14 02:54:53.234180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1891994 ] 00:06:58.218 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.218 [2024-07-14 02:54:53.296026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.218 [2024-07-14 02:54:53.386438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.640 02:54:54 -- accel/accel.sh@18 -- # out=' 00:06:59.640 SPDK Configuration: 00:06:59.640 Core mask: 0x1 00:06:59.640 00:06:59.640 Accel Perf Configuration: 00:06:59.640 Workload Type: compare 00:06:59.640 Transfer size: 4096 bytes 00:06:59.640 Vector count 1 00:06:59.640 Module: software 00:06:59.640 Queue depth: 32 00:06:59.640 Allocate depth: 32 00:06:59.640 # threads/core: 1 00:06:59.640 Run time: 1 seconds 00:06:59.640 Verify: Yes 00:06:59.640 00:06:59.640 Running for 1 seconds... 
00:06:59.640 00:06:59.640 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:59.640 ------------------------------------------------------------------------------------ 00:06:59.640 0,0 395264/s 1544 MiB/s 0 0 00:06:59.640 ==================================================================================== 00:06:59.640 Total 395264/s 1544 MiB/s 0 0' 00:06:59.640 02:54:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.640 02:54:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:59.640 02:54:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.640 02:54:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:59.640 02:54:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.640 02:54:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.640 02:54:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.640 02:54:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.640 02:54:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.640 02:54:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.640 02:54:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.640 02:54:54 -- accel/accel.sh@42 -- # jq -r . 00:06:59.640 [2024-07-14 02:54:54.642480] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:59.640 [2024-07-14 02:54:54.642566] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1892134 ] 00:06:59.640 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.640 [2024-07-14 02:54:54.705004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.640 [2024-07-14 02:54:54.796191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.640 02:54:54 -- accel/accel.sh@21 -- # val= 00:06:59.640 02:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.640 02:54:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.640 02:54:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.640 02:54:54 -- accel/accel.sh@21 -- # val= 00:06:59.640 02:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.640 02:54:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.640 02:54:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.640 02:54:54 -- accel/accel.sh@21 -- # val=0x1 00:06:59.640 02:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.640 02:54:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.641 02:54:54 -- accel/accel.sh@21 -- # val= 00:06:59.641 02:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.641 02:54:54 -- accel/accel.sh@21 -- # val= 00:06:59.641 02:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.641 02:54:54 -- accel/accel.sh@21 -- # val=compare 00:06:59.641 02:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.641 02:54:54 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.641 02:54:54 -- 
accel/accel.sh@20 -- # read -r var val 00:06:59.641 02:54:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:59.641 02:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.641 02:54:54 -- accel/accel.sh@21 -- # val= 00:06:59.641 02:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.641 02:54:54 -- accel/accel.sh@21 -- # val=software 00:06:59.641 02:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.641 02:54:54 -- accel/accel.sh@23 -- # accel_module=software 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.641 02:54:54 -- accel/accel.sh@21 -- # val=32 00:06:59.641 02:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.641 02:54:54 -- accel/accel.sh@21 -- # val=32 00:06:59.641 02:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.641 02:54:54 -- accel/accel.sh@21 -- # val=1 00:06:59.641 02:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.641 02:54:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:59.641 02:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.641 02:54:54 -- accel/accel.sh@21 -- # val=Yes 00:06:59.641 02:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.641 02:54:54 -- accel/accel.sh@20 
-- # read -r var val 00:06:59.641 02:54:54 -- accel/accel.sh@21 -- # val= 00:06:59.641 02:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # read -r var val 00:06:59.641 02:54:54 -- accel/accel.sh@21 -- # val= 00:06:59.641 02:54:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # IFS=: 00:06:59.641 02:54:54 -- accel/accel.sh@20 -- # read -r var val 00:07:01.022 02:54:56 -- accel/accel.sh@21 -- # val= 00:07:01.022 02:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.022 02:54:56 -- accel/accel.sh@20 -- # IFS=: 00:07:01.022 02:54:56 -- accel/accel.sh@20 -- # read -r var val 00:07:01.022 02:54:56 -- accel/accel.sh@21 -- # val= 00:07:01.022 02:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.022 02:54:56 -- accel/accel.sh@20 -- # IFS=: 00:07:01.022 02:54:56 -- accel/accel.sh@20 -- # read -r var val 00:07:01.022 02:54:56 -- accel/accel.sh@21 -- # val= 00:07:01.022 02:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.022 02:54:56 -- accel/accel.sh@20 -- # IFS=: 00:07:01.022 02:54:56 -- accel/accel.sh@20 -- # read -r var val 00:07:01.022 02:54:56 -- accel/accel.sh@21 -- # val= 00:07:01.022 02:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.022 02:54:56 -- accel/accel.sh@20 -- # IFS=: 00:07:01.022 02:54:56 -- accel/accel.sh@20 -- # read -r var val 00:07:01.022 02:54:56 -- accel/accel.sh@21 -- # val= 00:07:01.022 02:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.022 02:54:56 -- accel/accel.sh@20 -- # IFS=: 00:07:01.022 02:54:56 -- accel/accel.sh@20 -- # read -r var val 00:07:01.022 02:54:56 -- accel/accel.sh@21 -- # val= 00:07:01.022 02:54:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.022 02:54:56 -- accel/accel.sh@20 -- # IFS=: 00:07:01.022 02:54:56 -- accel/accel.sh@20 -- # read -r var val 00:07:01.022 02:54:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.022 02:54:56 -- 
accel/accel.sh@28 -- # [[ -n compare ]] 00:07:01.022 02:54:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.022 00:07:01.022 real 0m2.807s 00:07:01.022 user 0m2.509s 00:07:01.022 sys 0m0.290s 00:07:01.022 02:54:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.022 02:54:56 -- common/autotest_common.sh@10 -- # set +x 00:07:01.022 ************************************ 00:07:01.022 END TEST accel_compare 00:07:01.022 ************************************ 00:07:01.022 02:54:56 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:01.022 02:54:56 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:01.022 02:54:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.022 02:54:56 -- common/autotest_common.sh@10 -- # set +x 00:07:01.022 ************************************ 00:07:01.022 START TEST accel_xor 00:07:01.022 ************************************ 00:07:01.022 02:54:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:01.022 02:54:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.022 02:54:56 -- accel/accel.sh@17 -- # local accel_module 00:07:01.022 02:54:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:01.022 02:54:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:01.022 02:54:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.022 02:54:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.022 02:54:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.022 02:54:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.022 02:54:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.022 02:54:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.022 02:54:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.022 02:54:56 -- accel/accel.sh@42 -- # jq -r . 
00:07:01.022 [2024-07-14 02:54:56.067975] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:01.022 [2024-07-14 02:54:56.068047] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1892412 ] 00:07:01.022 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.022 [2024-07-14 02:54:56.129354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.022 [2024-07-14 02:54:56.219494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.401 02:54:57 -- accel/accel.sh@18 -- # out=' 00:07:02.401 SPDK Configuration: 00:07:02.401 Core mask: 0x1 00:07:02.401 00:07:02.401 Accel Perf Configuration: 00:07:02.401 Workload Type: xor 00:07:02.401 Source buffers: 2 00:07:02.401 Transfer size: 4096 bytes 00:07:02.401 Vector count 1 00:07:02.401 Module: software 00:07:02.401 Queue depth: 32 00:07:02.401 Allocate depth: 32 00:07:02.401 # threads/core: 1 00:07:02.401 Run time: 1 seconds 00:07:02.401 Verify: Yes 00:07:02.401 00:07:02.401 Running for 1 seconds... 
00:07:02.401 00:07:02.401 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.401 ------------------------------------------------------------------------------------ 00:07:02.401 0,0 192864/s 753 MiB/s 0 0 00:07:02.401 ==================================================================================== 00:07:02.401 Total 192864/s 753 MiB/s 0 0' 00:07:02.401 02:54:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.401 02:54:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:02.401 02:54:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.401 02:54:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:02.401 02:54:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.401 02:54:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.401 02:54:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.401 02:54:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.401 02:54:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.401 02:54:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.401 02:54:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.401 02:54:57 -- accel/accel.sh@42 -- # jq -r . 00:07:02.401 [2024-07-14 02:54:57.468804] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:02.401 [2024-07-14 02:54:57.468993] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1892562 ] 00:07:02.401 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.401 [2024-07-14 02:54:57.529260] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.401 [2024-07-14 02:54:57.619805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.661 02:54:57 -- accel/accel.sh@21 -- # val= 00:07:02.662 02:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.662 02:54:57 -- accel/accel.sh@21 -- # val= 00:07:02.662 02:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.662 02:54:57 -- accel/accel.sh@21 -- # val=0x1 00:07:02.662 02:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.662 02:54:57 -- accel/accel.sh@21 -- # val= 00:07:02.662 02:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.662 02:54:57 -- accel/accel.sh@21 -- # val= 00:07:02.662 02:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.662 02:54:57 -- accel/accel.sh@21 -- # val=xor 00:07:02.662 02:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.662 02:54:57 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.662 02:54:57 -- 
accel/accel.sh@20 -- # read -r var val 00:07:02.662 02:54:57 -- accel/accel.sh@21 -- # val=2 00:07:02.662 02:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.662 02:54:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.662 02:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.662 02:54:57 -- accel/accel.sh@21 -- # val= 00:07:02.662 02:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.662 02:54:57 -- accel/accel.sh@21 -- # val=software 00:07:02.662 02:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.662 02:54:57 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.662 02:54:57 -- accel/accel.sh@21 -- # val=32 00:07:02.662 02:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.662 02:54:57 -- accel/accel.sh@21 -- # val=32 00:07:02.662 02:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.662 02:54:57 -- accel/accel.sh@21 -- # val=1 00:07:02.662 02:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.662 02:54:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.662 02:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- 
# read -r var val 00:07:02.662 02:54:57 -- accel/accel.sh@21 -- # val=Yes 00:07:02.662 02:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.662 02:54:57 -- accel/accel.sh@21 -- # val= 00:07:02.662 02:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # read -r var val 00:07:02.662 02:54:57 -- accel/accel.sh@21 -- # val= 00:07:02.662 02:54:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # IFS=: 00:07:02.662 02:54:57 -- accel/accel.sh@20 -- # read -r var val 00:07:04.041 02:54:58 -- accel/accel.sh@21 -- # val= 00:07:04.041 02:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.041 02:54:58 -- accel/accel.sh@20 -- # IFS=: 00:07:04.041 02:54:58 -- accel/accel.sh@20 -- # read -r var val 00:07:04.041 02:54:58 -- accel/accel.sh@21 -- # val= 00:07:04.041 02:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.041 02:54:58 -- accel/accel.sh@20 -- # IFS=: 00:07:04.041 02:54:58 -- accel/accel.sh@20 -- # read -r var val 00:07:04.041 02:54:58 -- accel/accel.sh@21 -- # val= 00:07:04.041 02:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.041 02:54:58 -- accel/accel.sh@20 -- # IFS=: 00:07:04.041 02:54:58 -- accel/accel.sh@20 -- # read -r var val 00:07:04.041 02:54:58 -- accel/accel.sh@21 -- # val= 00:07:04.041 02:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.041 02:54:58 -- accel/accel.sh@20 -- # IFS=: 00:07:04.041 02:54:58 -- accel/accel.sh@20 -- # read -r var val 00:07:04.041 02:54:58 -- accel/accel.sh@21 -- # val= 00:07:04.041 02:54:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.041 02:54:58 -- accel/accel.sh@20 -- # IFS=: 00:07:04.041 02:54:58 -- accel/accel.sh@20 -- # read -r var val 00:07:04.041 02:54:58 -- accel/accel.sh@21 -- # val= 00:07:04.041 02:54:58 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:04.041 02:54:58 -- accel/accel.sh@20 -- # IFS=: 00:07:04.041 02:54:58 -- accel/accel.sh@20 -- # read -r var val 00:07:04.041 02:54:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.041 02:54:58 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:04.041 02:54:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.041 00:07:04.041 real 0m2.810s 00:07:04.041 user 0m2.517s 00:07:04.041 sys 0m0.285s 00:07:04.041 02:54:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.041 02:54:58 -- common/autotest_common.sh@10 -- # set +x 00:07:04.041 ************************************ 00:07:04.041 END TEST accel_xor 00:07:04.041 ************************************ 00:07:04.041 02:54:58 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:04.041 02:54:58 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:04.041 02:54:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:04.041 02:54:58 -- common/autotest_common.sh@10 -- # set +x 00:07:04.041 ************************************ 00:07:04.041 START TEST accel_xor 00:07:04.041 ************************************ 00:07:04.041 02:54:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:04.041 02:54:58 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.041 02:54:58 -- accel/accel.sh@17 -- # local accel_module 00:07:04.041 02:54:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:04.041 02:54:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:04.041 02:54:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.041 02:54:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.041 02:54:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.041 02:54:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.041 02:54:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.041 02:54:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 
00:07:04.041 02:54:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.041 02:54:58 -- accel/accel.sh@42 -- # jq -r . 00:07:04.041 [2024-07-14 02:54:58.902612] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:04.041 [2024-07-14 02:54:58.902692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1892717 ] 00:07:04.041 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.041 [2024-07-14 02:54:58.964481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.041 [2024-07-14 02:54:59.054911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.421 02:55:00 -- accel/accel.sh@18 -- # out=' 00:07:05.421 SPDK Configuration: 00:07:05.421 Core mask: 0x1 00:07:05.421 00:07:05.421 Accel Perf Configuration: 00:07:05.421 Workload Type: xor 00:07:05.421 Source buffers: 3 00:07:05.421 Transfer size: 4096 bytes 00:07:05.421 Vector count 1 00:07:05.421 Module: software 00:07:05.421 Queue depth: 32 00:07:05.421 Allocate depth: 32 00:07:05.421 # threads/core: 1 00:07:05.421 Run time: 1 seconds 00:07:05.421 Verify: Yes 00:07:05.421 00:07:05.421 Running for 1 seconds... 
00:07:05.421 00:07:05.422 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.422 ------------------------------------------------------------------------------------ 00:07:05.422 0,0 182944/s 714 MiB/s 0 0 00:07:05.422 ==================================================================================== 00:07:05.422 Total 182944/s 714 MiB/s 0 0' 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # IFS=: 00:07:05.422 02:55:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # read -r var val 00:07:05.422 02:55:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:05.422 02:55:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.422 02:55:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.422 02:55:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.422 02:55:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.422 02:55:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.422 02:55:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.422 02:55:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.422 02:55:00 -- accel/accel.sh@42 -- # jq -r . 00:07:05.422 [2024-07-14 02:55:00.304294] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:05.422 [2024-07-14 02:55:00.304374] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1892861 ] 00:07:05.422 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.422 [2024-07-14 02:55:00.365368] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.422 [2024-07-14 02:55:00.455848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.422 02:55:00 -- accel/accel.sh@21 -- # val= 00:07:05.422 02:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # IFS=: 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # read -r var val 00:07:05.422 02:55:00 -- accel/accel.sh@21 -- # val= 00:07:05.422 02:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # IFS=: 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # read -r var val 00:07:05.422 02:55:00 -- accel/accel.sh@21 -- # val=0x1 00:07:05.422 02:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # IFS=: 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # read -r var val 00:07:05.422 02:55:00 -- accel/accel.sh@21 -- # val= 00:07:05.422 02:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # IFS=: 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # read -r var val 00:07:05.422 02:55:00 -- accel/accel.sh@21 -- # val= 00:07:05.422 02:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # IFS=: 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # read -r var val 00:07:05.422 02:55:00 -- accel/accel.sh@21 -- # val=xor 00:07:05.422 02:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.422 02:55:00 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # IFS=: 00:07:05.422 02:55:00 -- 
accel/accel.sh@20 -- # read -r var val 00:07:05.422 02:55:00 -- accel/accel.sh@21 -- # val=3 00:07:05.422 02:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # IFS=: 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # read -r var val 00:07:05.422 02:55:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.422 02:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # IFS=: 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # read -r var val 00:07:05.422 02:55:00 -- accel/accel.sh@21 -- # val= 00:07:05.422 02:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # IFS=: 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # read -r var val 00:07:05.422 02:55:00 -- accel/accel.sh@21 -- # val=software 00:07:05.422 02:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.422 02:55:00 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # IFS=: 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # read -r var val 00:07:05.422 02:55:00 -- accel/accel.sh@21 -- # val=32 00:07:05.422 02:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # IFS=: 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # read -r var val 00:07:05.422 02:55:00 -- accel/accel.sh@21 -- # val=32 00:07:05.422 02:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # IFS=: 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # read -r var val 00:07:05.422 02:55:00 -- accel/accel.sh@21 -- # val=1 00:07:05.422 02:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # IFS=: 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # read -r var val 00:07:05.422 02:55:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.422 02:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # IFS=: 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- 
# read -r var val 00:07:05.422 02:55:00 -- accel/accel.sh@21 -- # val=Yes 00:07:05.422 02:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # IFS=: 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # read -r var val 00:07:05.422 02:55:00 -- accel/accel.sh@21 -- # val= 00:07:05.422 02:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # IFS=: 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # read -r var val 00:07:05.422 02:55:00 -- accel/accel.sh@21 -- # val= 00:07:05.422 02:55:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # IFS=: 00:07:05.422 02:55:00 -- accel/accel.sh@20 -- # read -r var val 00:07:06.801 02:55:01 -- accel/accel.sh@21 -- # val= 00:07:06.801 02:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.801 02:55:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.801 02:55:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.801 02:55:01 -- accel/accel.sh@21 -- # val= 00:07:06.801 02:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.801 02:55:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.801 02:55:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.801 02:55:01 -- accel/accel.sh@21 -- # val= 00:07:06.801 02:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.801 02:55:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.801 02:55:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.801 02:55:01 -- accel/accel.sh@21 -- # val= 00:07:06.801 02:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.801 02:55:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.801 02:55:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.801 02:55:01 -- accel/accel.sh@21 -- # val= 00:07:06.801 02:55:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.801 02:55:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.801 02:55:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.801 02:55:01 -- accel/accel.sh@21 -- # val= 00:07:06.801 02:55:01 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:06.801 02:55:01 -- accel/accel.sh@20 -- # IFS=: 00:07:06.801 02:55:01 -- accel/accel.sh@20 -- # read -r var val 00:07:06.801 02:55:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.801 02:55:01 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:06.801 02:55:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.801 00:07:06.801 real 0m2.794s 00:07:06.801 user 0m2.496s 00:07:06.801 sys 0m0.290s 00:07:06.801 02:55:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.801 02:55:01 -- common/autotest_common.sh@10 -- # set +x 00:07:06.801 ************************************ 00:07:06.801 END TEST accel_xor 00:07:06.801 ************************************ 00:07:06.801 02:55:01 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:06.801 02:55:01 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:06.801 02:55:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.801 02:55:01 -- common/autotest_common.sh@10 -- # set +x 00:07:06.801 ************************************ 00:07:06.801 START TEST accel_dif_verify 00:07:06.801 ************************************ 00:07:06.801 02:55:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:06.801 02:55:01 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.801 02:55:01 -- accel/accel.sh@17 -- # local accel_module 00:07:06.801 02:55:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:06.801 02:55:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:06.801 02:55:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.801 02:55:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.801 02:55:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.801 02:55:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.801 02:55:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.801 02:55:01 -- accel/accel.sh@37 -- # [[ -n 
'' ]] 00:07:06.801 02:55:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.801 02:55:01 -- accel/accel.sh@42 -- # jq -r . 00:07:06.801 [2024-07-14 02:55:01.723366] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:06.801 [2024-07-14 02:55:01.723445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1893135 ] 00:07:06.801 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.801 [2024-07-14 02:55:01.783001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.802 [2024-07-14 02:55:01.871801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.181 02:55:03 -- accel/accel.sh@18 -- # out=' 00:07:08.181 SPDK Configuration: 00:07:08.181 Core mask: 0x1 00:07:08.181 00:07:08.181 Accel Perf Configuration: 00:07:08.181 Workload Type: dif_verify 00:07:08.181 Vector size: 4096 bytes 00:07:08.181 Transfer size: 4096 bytes 00:07:08.181 Block size: 512 bytes 00:07:08.181 Metadata size: 8 bytes 00:07:08.181 Vector count 1 00:07:08.181 Module: software 00:07:08.181 Queue depth: 32 00:07:08.181 Allocate depth: 32 00:07:08.181 # threads/core: 1 00:07:08.181 Run time: 1 seconds 00:07:08.181 Verify: No 00:07:08.181 00:07:08.181 Running for 1 seconds... 
00:07:08.181 00:07:08.181 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.181 ------------------------------------------------------------------------------------ 00:07:08.181 0,0 81888/s 324 MiB/s 0 0 00:07:08.181 ==================================================================================== 00:07:08.181 Total 81888/s 319 MiB/s 0 0' 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.181 02:55:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # read -r var val 00:07:08.181 02:55:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:08.181 02:55:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.181 02:55:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.181 02:55:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.181 02:55:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.181 02:55:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.181 02:55:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.181 02:55:03 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.181 02:55:03 -- accel/accel.sh@42 -- # jq -r . 00:07:08.181 [2024-07-14 02:55:03.108627] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:08.181 [2024-07-14 02:55:03.108703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1893283 ] 00:07:08.181 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.181 [2024-07-14 02:55:03.169739] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.181 [2024-07-14 02:55:03.260606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.181 02:55:03 -- accel/accel.sh@21 -- # val= 00:07:08.181 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # read -r var val 00:07:08.181 02:55:03 -- accel/accel.sh@21 -- # val= 00:07:08.181 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # read -r var val 00:07:08.181 02:55:03 -- accel/accel.sh@21 -- # val=0x1 00:07:08.181 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # read -r var val 00:07:08.181 02:55:03 -- accel/accel.sh@21 -- # val= 00:07:08.181 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # read -r var val 00:07:08.181 02:55:03 -- accel/accel.sh@21 -- # val= 00:07:08.181 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # read -r var val 00:07:08.181 02:55:03 -- accel/accel.sh@21 -- # val=dif_verify 00:07:08.181 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.181 02:55:03 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.181 02:55:03 -- 
accel/accel.sh@20 -- # read -r var val 00:07:08.181 02:55:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.181 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # read -r var val 00:07:08.181 02:55:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.181 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # read -r var val 00:07:08.181 02:55:03 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:08.181 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.181 02:55:03 -- accel/accel.sh@20 -- # read -r var val 00:07:08.181 02:55:03 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:08.181 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # read -r var val 00:07:08.182 02:55:03 -- accel/accel.sh@21 -- # val= 00:07:08.182 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # read -r var val 00:07:08.182 02:55:03 -- accel/accel.sh@21 -- # val=software 00:07:08.182 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.182 02:55:03 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # read -r var val 00:07:08.182 02:55:03 -- accel/accel.sh@21 -- # val=32 00:07:08.182 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # read -r var val 00:07:08.182 02:55:03 -- accel/accel.sh@21 -- # val=32 00:07:08.182 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.182 02:55:03 -- 
accel/accel.sh@20 -- # read -r var val 00:07:08.182 02:55:03 -- accel/accel.sh@21 -- # val=1 00:07:08.182 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # read -r var val 00:07:08.182 02:55:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.182 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # read -r var val 00:07:08.182 02:55:03 -- accel/accel.sh@21 -- # val=No 00:07:08.182 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # read -r var val 00:07:08.182 02:55:03 -- accel/accel.sh@21 -- # val= 00:07:08.182 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # read -r var val 00:07:08.182 02:55:03 -- accel/accel.sh@21 -- # val= 00:07:08.182 02:55:03 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # IFS=: 00:07:08.182 02:55:03 -- accel/accel.sh@20 -- # read -r var val 00:07:09.562 02:55:04 -- accel/accel.sh@21 -- # val= 00:07:09.562 02:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.562 02:55:04 -- accel/accel.sh@20 -- # IFS=: 00:07:09.562 02:55:04 -- accel/accel.sh@20 -- # read -r var val 00:07:09.562 02:55:04 -- accel/accel.sh@21 -- # val= 00:07:09.562 02:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.562 02:55:04 -- accel/accel.sh@20 -- # IFS=: 00:07:09.562 02:55:04 -- accel/accel.sh@20 -- # read -r var val 00:07:09.562 02:55:04 -- accel/accel.sh@21 -- # val= 00:07:09.562 02:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.562 02:55:04 -- accel/accel.sh@20 -- # IFS=: 00:07:09.562 02:55:04 -- accel/accel.sh@20 -- # read -r var val 00:07:09.562 02:55:04 -- accel/accel.sh@21 -- # val= 00:07:09.562 02:55:04 
-- accel/accel.sh@22 -- # case "$var" in 00:07:09.562 02:55:04 -- accel/accel.sh@20 -- # IFS=: 00:07:09.562 02:55:04 -- accel/accel.sh@20 -- # read -r var val 00:07:09.562 02:55:04 -- accel/accel.sh@21 -- # val= 00:07:09.562 02:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.562 02:55:04 -- accel/accel.sh@20 -- # IFS=: 00:07:09.562 02:55:04 -- accel/accel.sh@20 -- # read -r var val 00:07:09.562 02:55:04 -- accel/accel.sh@21 -- # val= 00:07:09.562 02:55:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.562 02:55:04 -- accel/accel.sh@20 -- # IFS=: 00:07:09.562 02:55:04 -- accel/accel.sh@20 -- # read -r var val 00:07:09.562 02:55:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:09.562 02:55:04 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:09.562 02:55:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.562 00:07:09.562 real 0m2.790s 00:07:09.562 user 0m2.501s 00:07:09.562 sys 0m0.284s 00:07:09.562 02:55:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.562 02:55:04 -- common/autotest_common.sh@10 -- # set +x 00:07:09.562 ************************************ 00:07:09.562 END TEST accel_dif_verify 00:07:09.562 ************************************ 00:07:09.562 02:55:04 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:09.562 02:55:04 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:09.562 02:55:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.562 02:55:04 -- common/autotest_common.sh@10 -- # set +x 00:07:09.562 ************************************ 00:07:09.562 START TEST accel_dif_generate 00:07:09.562 ************************************ 00:07:09.562 02:55:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:09.562 02:55:04 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.562 02:55:04 -- accel/accel.sh@17 -- # local accel_module 00:07:09.562 02:55:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 
00:07:09.562 02:55:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:09.562 02:55:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.562 02:55:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.562 02:55:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.562 02:55:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.562 02:55:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.562 02:55:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.562 02:55:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.562 02:55:04 -- accel/accel.sh@42 -- # jq -r . 00:07:09.562 [2024-07-14 02:55:04.538809] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:09.562 [2024-07-14 02:55:04.538897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1893442 ] 00:07:09.562 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.562 [2024-07-14 02:55:04.600246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.562 [2024-07-14 02:55:04.691264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.941 02:55:05 -- accel/accel.sh@18 -- # out=' 00:07:10.941 SPDK Configuration: 00:07:10.941 Core mask: 0x1 00:07:10.941 00:07:10.942 Accel Perf Configuration: 00:07:10.942 Workload Type: dif_generate 00:07:10.942 Vector size: 4096 bytes 00:07:10.942 Transfer size: 4096 bytes 00:07:10.942 Block size: 512 bytes 00:07:10.942 Metadata size: 8 bytes 00:07:10.942 Vector count 1 00:07:10.942 Module: software 00:07:10.942 Queue depth: 32 00:07:10.942 Allocate depth: 32 00:07:10.942 # threads/core: 1 00:07:10.942 Run time: 1 seconds 00:07:10.942 Verify: No 00:07:10.942 00:07:10.942 Running for 1 seconds... 
00:07:10.942 00:07:10.942 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.942 ------------------------------------------------------------------------------------ 00:07:10.942 0,0 96448/s 382 MiB/s 0 0 00:07:10.942 ==================================================================================== 00:07:10.942 Total 96448/s 376 MiB/s 0 0' 00:07:10.942 02:55:05 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:10.942 02:55:05 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:10.942 02:55:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.942 02:55:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.942 02:55:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.942 02:55:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.942 02:55:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.942 02:55:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.942 02:55:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.942 02:55:05 -- accel/accel.sh@42 -- # jq -r . 00:07:10.942 [2024-07-14 02:55:05.939214] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:10.942 [2024-07-14 02:55:05.939309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1893583 ] 00:07:10.942 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.942 [2024-07-14 02:55:05.999339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.942 [2024-07-14 02:55:06.089531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val= 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val= 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val=0x1 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val= 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val= 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val=dif_generate 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 
-- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val= 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val=software 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val=32 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val=32 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 
-- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val=1 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val=No 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val= 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # read -r var val 00:07:10.942 02:55:06 -- accel/accel.sh@21 -- # val= 00:07:10.942 02:55:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # IFS=: 00:07:10.942 02:55:06 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 02:55:07 -- accel/accel.sh@21 -- # val= 00:07:12.322 02:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 02:55:07 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 02:55:07 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 02:55:07 -- accel/accel.sh@21 -- # val= 00:07:12.322 02:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 02:55:07 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 02:55:07 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 02:55:07 -- accel/accel.sh@21 -- # val= 00:07:12.322 02:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.322 02:55:07 -- accel/accel.sh@20 -- # IFS=: 00:07:12.322 02:55:07 -- accel/accel.sh@20 -- # read -r var val 00:07:12.322 02:55:07 -- accel/accel.sh@21 -- # val= 00:07:12.323 
02:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.323 02:55:07 -- accel/accel.sh@20 -- # IFS=: 00:07:12.323 02:55:07 -- accel/accel.sh@20 -- # read -r var val 00:07:12.323 02:55:07 -- accel/accel.sh@21 -- # val= 00:07:12.323 02:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.323 02:55:07 -- accel/accel.sh@20 -- # IFS=: 00:07:12.323 02:55:07 -- accel/accel.sh@20 -- # read -r var val 00:07:12.323 02:55:07 -- accel/accel.sh@21 -- # val= 00:07:12.323 02:55:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.323 02:55:07 -- accel/accel.sh@20 -- # IFS=: 00:07:12.323 02:55:07 -- accel/accel.sh@20 -- # read -r var val 00:07:12.323 02:55:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.323 02:55:07 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:12.323 02:55:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.323 00:07:12.323 real 0m2.793s 00:07:12.323 user 0m2.501s 00:07:12.323 sys 0m0.286s 00:07:12.323 02:55:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.323 02:55:07 -- common/autotest_common.sh@10 -- # set +x 00:07:12.323 ************************************ 00:07:12.323 END TEST accel_dif_generate 00:07:12.323 ************************************ 00:07:12.323 02:55:07 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:12.323 02:55:07 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:12.323 02:55:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.323 02:55:07 -- common/autotest_common.sh@10 -- # set +x 00:07:12.323 ************************************ 00:07:12.323 START TEST accel_dif_generate_copy 00:07:12.323 ************************************ 00:07:12.323 02:55:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:12.323 02:55:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.323 02:55:07 -- accel/accel.sh@17 -- # local accel_module 00:07:12.323 02:55:07 -- accel/accel.sh@18 -- # 
accel_perf -t 1 -w dif_generate_copy 00:07:12.323 02:55:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:12.323 02:55:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.323 02:55:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.323 02:55:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.323 02:55:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.323 02:55:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.323 02:55:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.323 02:55:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.323 02:55:07 -- accel/accel.sh@42 -- # jq -r . 00:07:12.323 [2024-07-14 02:55:07.359248] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:12.323 [2024-07-14 02:55:07.359324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1893841 ] 00:07:12.323 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.323 [2024-07-14 02:55:07.421045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.323 [2024-07-14 02:55:07.509603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.703 02:55:08 -- accel/accel.sh@18 -- # out=' 00:07:13.703 SPDK Configuration: 00:07:13.703 Core mask: 0x1 00:07:13.703 00:07:13.703 Accel Perf Configuration: 00:07:13.703 Workload Type: dif_generate_copy 00:07:13.703 Vector size: 4096 bytes 00:07:13.703 Transfer size: 4096 bytes 00:07:13.703 Vector count 1 00:07:13.703 Module: software 00:07:13.703 Queue depth: 32 00:07:13.703 Allocate depth: 32 00:07:13.703 # threads/core: 1 00:07:13.703 Run time: 1 seconds 00:07:13.703 Verify: No 00:07:13.703 00:07:13.703 Running for 1 seconds... 
00:07:13.703 00:07:13.703 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.703 ------------------------------------------------------------------------------------ 00:07:13.703 0,0 76192/s 302 MiB/s 0 0 00:07:13.703 ==================================================================================== 00:07:13.703 Total 76192/s 297 MiB/s 0 0' 00:07:13.703 02:55:08 -- accel/accel.sh@20 -- # IFS=: 00:07:13.703 02:55:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:13.703 02:55:08 -- accel/accel.sh@20 -- # read -r var val 00:07:13.703 02:55:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:13.703 02:55:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.703 02:55:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.703 02:55:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.703 02:55:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.703 02:55:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.703 02:55:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.703 02:55:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.703 02:55:08 -- accel/accel.sh@42 -- # jq -r . 00:07:13.703 [2024-07-14 02:55:08.742861] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:13.703 [2024-07-14 02:55:08.742966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1894010 ] 00:07:13.703 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.703 [2024-07-14 02:55:08.804407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.703 [2024-07-14 02:55:08.895015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.963 02:55:08 -- accel/accel.sh@21 -- # val= 00:07:13.963 02:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # IFS=: 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # read -r var val 00:07:13.963 02:55:08 -- accel/accel.sh@21 -- # val= 00:07:13.963 02:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # IFS=: 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # read -r var val 00:07:13.963 02:55:08 -- accel/accel.sh@21 -- # val=0x1 00:07:13.963 02:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # IFS=: 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # read -r var val 00:07:13.963 02:55:08 -- accel/accel.sh@21 -- # val= 00:07:13.963 02:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # IFS=: 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # read -r var val 00:07:13.963 02:55:08 -- accel/accel.sh@21 -- # val= 00:07:13.963 02:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # IFS=: 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # read -r var val 00:07:13.963 02:55:08 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:13.963 02:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.963 02:55:08 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # IFS=: 00:07:13.963 
02:55:08 -- accel/accel.sh@20 -- # read -r var val 00:07:13.963 02:55:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.963 02:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # IFS=: 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # read -r var val 00:07:13.963 02:55:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.963 02:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # IFS=: 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # read -r var val 00:07:13.963 02:55:08 -- accel/accel.sh@21 -- # val= 00:07:13.963 02:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # IFS=: 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # read -r var val 00:07:13.963 02:55:08 -- accel/accel.sh@21 -- # val=software 00:07:13.963 02:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.963 02:55:08 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # IFS=: 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # read -r var val 00:07:13.963 02:55:08 -- accel/accel.sh@21 -- # val=32 00:07:13.963 02:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # IFS=: 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # read -r var val 00:07:13.963 02:55:08 -- accel/accel.sh@21 -- # val=32 00:07:13.963 02:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # IFS=: 00:07:13.963 02:55:08 -- accel/accel.sh@20 -- # read -r var val 00:07:13.963 02:55:08 -- accel/accel.sh@21 -- # val=1 00:07:13.964 02:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.964 02:55:08 -- accel/accel.sh@20 -- # IFS=: 00:07:13.964 02:55:08 -- accel/accel.sh@20 -- # read -r var val 00:07:13.964 02:55:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.964 02:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.964 02:55:08 -- accel/accel.sh@20 -- # IFS=: 00:07:13.964 02:55:08 
-- accel/accel.sh@20 -- # read -r var val 00:07:13.964 02:55:08 -- accel/accel.sh@21 -- # val=No 00:07:13.964 02:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.964 02:55:08 -- accel/accel.sh@20 -- # IFS=: 00:07:13.964 02:55:08 -- accel/accel.sh@20 -- # read -r var val 00:07:13.964 02:55:08 -- accel/accel.sh@21 -- # val= 00:07:13.964 02:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.964 02:55:08 -- accel/accel.sh@20 -- # IFS=: 00:07:13.964 02:55:08 -- accel/accel.sh@20 -- # read -r var val 00:07:13.964 02:55:08 -- accel/accel.sh@21 -- # val= 00:07:13.964 02:55:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.964 02:55:08 -- accel/accel.sh@20 -- # IFS=: 00:07:13.964 02:55:08 -- accel/accel.sh@20 -- # read -r var val 00:07:14.899 02:55:10 -- accel/accel.sh@21 -- # val= 00:07:14.899 02:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.899 02:55:10 -- accel/accel.sh@20 -- # IFS=: 00:07:14.899 02:55:10 -- accel/accel.sh@20 -- # read -r var val 00:07:14.899 02:55:10 -- accel/accel.sh@21 -- # val= 00:07:14.900 02:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.900 02:55:10 -- accel/accel.sh@20 -- # IFS=: 00:07:14.900 02:55:10 -- accel/accel.sh@20 -- # read -r var val 00:07:14.900 02:55:10 -- accel/accel.sh@21 -- # val= 00:07:14.900 02:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.900 02:55:10 -- accel/accel.sh@20 -- # IFS=: 00:07:14.900 02:55:10 -- accel/accel.sh@20 -- # read -r var val 00:07:14.900 02:55:10 -- accel/accel.sh@21 -- # val= 00:07:14.900 02:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.900 02:55:10 -- accel/accel.sh@20 -- # IFS=: 00:07:14.900 02:55:10 -- accel/accel.sh@20 -- # read -r var val 00:07:14.900 02:55:10 -- accel/accel.sh@21 -- # val= 00:07:14.900 02:55:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.900 02:55:10 -- accel/accel.sh@20 -- # IFS=: 00:07:14.900 02:55:10 -- accel/accel.sh@20 -- # read -r var val 00:07:14.900 02:55:10 -- accel/accel.sh@21 -- # val= 00:07:14.900 02:55:10 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:14.900 02:55:10 -- accel/accel.sh@20 -- # IFS=: 00:07:14.900 02:55:10 -- accel/accel.sh@20 -- # read -r var val 00:07:14.900 02:55:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.900 02:55:10 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:14.900 02:55:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.900 00:07:14.900 real 0m2.797s 00:07:14.900 user 0m2.512s 00:07:14.900 sys 0m0.277s 00:07:14.900 02:55:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.900 02:55:10 -- common/autotest_common.sh@10 -- # set +x 00:07:14.900 ************************************ 00:07:14.900 END TEST accel_dif_generate_copy 00:07:14.900 ************************************ 00:07:15.184 02:55:10 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:15.184 02:55:10 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.184 02:55:10 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:15.184 02:55:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.184 02:55:10 -- common/autotest_common.sh@10 -- # set +x 00:07:15.184 ************************************ 00:07:15.184 START TEST accel_comp 00:07:15.184 ************************************ 00:07:15.184 02:55:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.184 02:55:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.184 02:55:10 -- accel/accel.sh@17 -- # local accel_module 00:07:15.184 02:55:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.184 02:55:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 
00:07:15.184 02:55:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.184 02:55:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.184 02:55:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.184 02:55:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.184 02:55:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.184 02:55:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.184 02:55:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.184 02:55:10 -- accel/accel.sh@42 -- # jq -r . 00:07:15.184 [2024-07-14 02:55:10.181509] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:15.184 [2024-07-14 02:55:10.181586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1894164 ] 00:07:15.184 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.184 [2024-07-14 02:55:10.242604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.184 [2024-07-14 02:55:10.338093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.563 02:55:11 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:16.563 00:07:16.563 SPDK Configuration: 00:07:16.563 Core mask: 0x1 00:07:16.563 00:07:16.563 Accel Perf Configuration: 00:07:16.563 Workload Type: compress 00:07:16.563 Transfer size: 4096 bytes 00:07:16.563 Vector count 1 00:07:16.563 Module: software 00:07:16.563 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.563 Queue depth: 32 00:07:16.563 Allocate depth: 32 00:07:16.563 # threads/core: 1 00:07:16.563 Run time: 1 seconds 00:07:16.563 Verify: No 00:07:16.563 00:07:16.563 Running for 1 seconds... 
00:07:16.563 00:07:16.563 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.563 ------------------------------------------------------------------------------------ 00:07:16.563 0,0 32320/s 134 MiB/s 0 0 00:07:16.563 ==================================================================================== 00:07:16.563 Total 32320/s 126 MiB/s 0 0' 00:07:16.563 02:55:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.563 02:55:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.563 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.563 02:55:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.563 02:55:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.563 02:55:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.563 02:55:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.563 02:55:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.563 02:55:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.563 02:55:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.563 02:55:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.563 02:55:11 -- accel/accel.sh@42 -- # jq -r . 00:07:16.563 [2024-07-14 02:55:11.599115] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:16.563 [2024-07-14 02:55:11.599196] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1894312 ] 00:07:16.563 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.564 [2024-07-14 02:55:11.660976] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.564 [2024-07-14 02:55:11.754525] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.821 02:55:11 -- accel/accel.sh@21 -- # val= 00:07:16.821 02:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.821 02:55:11 -- accel/accel.sh@21 -- # val= 00:07:16.821 02:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.821 02:55:11 -- accel/accel.sh@21 -- # val= 00:07:16.821 02:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.821 02:55:11 -- accel/accel.sh@21 -- # val=0x1 00:07:16.821 02:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.821 02:55:11 -- accel/accel.sh@21 -- # val= 00:07:16.821 02:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.821 02:55:11 -- accel/accel.sh@21 -- # val= 00:07:16.821 02:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.821 02:55:11 -- accel/accel.sh@21 
-- # val=compress 00:07:16.821 02:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.821 02:55:11 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.821 02:55:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.821 02:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.821 02:55:11 -- accel/accel.sh@21 -- # val= 00:07:16.821 02:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.821 02:55:11 -- accel/accel.sh@21 -- # val=software 00:07:16.821 02:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.821 02:55:11 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.821 02:55:11 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:16.821 02:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.821 02:55:11 -- accel/accel.sh@21 -- # val=32 00:07:16.821 02:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.821 02:55:11 -- accel/accel.sh@21 -- # val=32 00:07:16.821 02:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.821 02:55:11 -- accel/accel.sh@21 -- # val=1 00:07:16.821 02:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # IFS=: 
00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.821 02:55:11 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.821 02:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.821 02:55:11 -- accel/accel.sh@21 -- # val=No 00:07:16.821 02:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.821 02:55:11 -- accel/accel.sh@21 -- # val= 00:07:16.821 02:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:16.821 02:55:11 -- accel/accel.sh@21 -- # val= 00:07:16.821 02:55:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # IFS=: 00:07:16.821 02:55:11 -- accel/accel.sh@20 -- # read -r var val 00:07:17.757 02:55:12 -- accel/accel.sh@21 -- # val= 00:07:17.757 02:55:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.757 02:55:12 -- accel/accel.sh@20 -- # IFS=: 00:07:17.757 02:55:12 -- accel/accel.sh@20 -- # read -r var val 00:07:17.757 02:55:12 -- accel/accel.sh@21 -- # val= 00:07:17.757 02:55:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.757 02:55:12 -- accel/accel.sh@20 -- # IFS=: 00:07:17.757 02:55:12 -- accel/accel.sh@20 -- # read -r var val 00:07:17.757 02:55:12 -- accel/accel.sh@21 -- # val= 00:07:17.757 02:55:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.757 02:55:12 -- accel/accel.sh@20 -- # IFS=: 00:07:17.757 02:55:12 -- accel/accel.sh@20 -- # read -r var val 00:07:17.757 02:55:12 -- accel/accel.sh@21 -- # val= 00:07:17.757 02:55:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.757 02:55:12 -- accel/accel.sh@20 -- # IFS=: 00:07:17.757 02:55:12 -- accel/accel.sh@20 -- # read -r var val 00:07:17.757 02:55:12 -- accel/accel.sh@21 -- # 
val= 00:07:17.757 02:55:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.757 02:55:12 -- accel/accel.sh@20 -- # IFS=: 00:07:17.757 02:55:12 -- accel/accel.sh@20 -- # read -r var val 00:07:17.757 02:55:12 -- accel/accel.sh@21 -- # val= 00:07:17.757 02:55:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.757 02:55:12 -- accel/accel.sh@20 -- # IFS=: 00:07:17.757 02:55:12 -- accel/accel.sh@20 -- # read -r var val 00:07:17.757 02:55:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.757 02:55:12 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:17.757 02:55:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.757 00:07:17.757 real 0m2.831s 00:07:17.757 user 0m2.542s 00:07:17.757 sys 0m0.283s 00:07:17.757 02:55:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.757 02:55:12 -- common/autotest_common.sh@10 -- # set +x 00:07:17.757 ************************************ 00:07:17.757 END TEST accel_comp 00:07:17.757 ************************************ 00:07:18.015 02:55:13 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:18.016 02:55:13 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:18.016 02:55:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.016 02:55:13 -- common/autotest_common.sh@10 -- # set +x 00:07:18.016 ************************************ 00:07:18.016 START TEST accel_decomp 00:07:18.016 ************************************ 00:07:18.016 02:55:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:18.016 02:55:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.016 02:55:13 -- accel/accel.sh@17 -- # local accel_module 00:07:18.016 02:55:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:18.016 02:55:13 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:18.016 02:55:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.016 02:55:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.016 02:55:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.016 02:55:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.016 02:55:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.016 02:55:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.016 02:55:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.016 02:55:13 -- accel/accel.sh@42 -- # jq -r . 00:07:18.016 [2024-07-14 02:55:13.038946] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:18.016 [2024-07-14 02:55:13.039024] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1894543 ] 00:07:18.016 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.016 [2024-07-14 02:55:13.103981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.016 [2024-07-14 02:55:13.196331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.394 02:55:14 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:19.394 00:07:19.394 SPDK Configuration: 00:07:19.394 Core mask: 0x1 00:07:19.394 00:07:19.394 Accel Perf Configuration: 00:07:19.394 Workload Type: decompress 00:07:19.394 Transfer size: 4096 bytes 00:07:19.394 Vector count 1 00:07:19.394 Module: software 00:07:19.394 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.394 Queue depth: 32 00:07:19.394 Allocate depth: 32 00:07:19.394 # threads/core: 1 00:07:19.394 Run time: 1 seconds 00:07:19.394 Verify: Yes 00:07:19.394 00:07:19.394 Running for 1 seconds... 
00:07:19.394 00:07:19.394 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.394 ------------------------------------------------------------------------------------ 00:07:19.394 0,0 55392/s 102 MiB/s 0 0 00:07:19.394 ==================================================================================== 00:07:19.394 Total 55392/s 216 MiB/s 0 0' 00:07:19.394 02:55:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.394 02:55:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:19.394 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.394 02:55:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:19.394 02:55:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.394 02:55:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.394 02:55:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.394 02:55:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.394 02:55:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.394 02:55:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.394 02:55:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.394 02:55:14 -- accel/accel.sh@42 -- # jq -r . 00:07:19.394 [2024-07-14 02:55:14.436970] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:19.394 [2024-07-14 02:55:14.437050] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1894738 ] 00:07:19.394 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.394 [2024-07-14 02:55:14.503457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.394 [2024-07-14 02:55:14.596615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.654 02:55:14 -- accel/accel.sh@21 -- # val= 00:07:19.654 02:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.654 02:55:14 -- accel/accel.sh@21 -- # val= 00:07:19.654 02:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.654 02:55:14 -- accel/accel.sh@21 -- # val= 00:07:19.654 02:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.654 02:55:14 -- accel/accel.sh@21 -- # val=0x1 00:07:19.654 02:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.654 02:55:14 -- accel/accel.sh@21 -- # val= 00:07:19.654 02:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.654 02:55:14 -- accel/accel.sh@21 -- # val= 00:07:19.654 02:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.654 02:55:14 -- accel/accel.sh@21 
-- # val=decompress 00:07:19.654 02:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.654 02:55:14 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.654 02:55:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.654 02:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.654 02:55:14 -- accel/accel.sh@21 -- # val= 00:07:19.654 02:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.654 02:55:14 -- accel/accel.sh@21 -- # val=software 00:07:19.654 02:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.654 02:55:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.654 02:55:14 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:19.654 02:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.654 02:55:14 -- accel/accel.sh@21 -- # val=32 00:07:19.654 02:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.654 02:55:14 -- accel/accel.sh@21 -- # val=32 00:07:19.654 02:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.654 02:55:14 -- accel/accel.sh@21 -- # val=1 00:07:19.654 02:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # 
IFS=: 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.654 02:55:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.654 02:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.654 02:55:14 -- accel/accel.sh@21 -- # val=Yes 00:07:19.654 02:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.654 02:55:14 -- accel/accel.sh@21 -- # val= 00:07:19.654 02:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:19.654 02:55:14 -- accel/accel.sh@21 -- # val= 00:07:19.654 02:55:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # IFS=: 00:07:19.654 02:55:14 -- accel/accel.sh@20 -- # read -r var val 00:07:20.592 02:55:15 -- accel/accel.sh@21 -- # val= 00:07:20.592 02:55:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.592 02:55:15 -- accel/accel.sh@20 -- # IFS=: 00:07:20.592 02:55:15 -- accel/accel.sh@20 -- # read -r var val 00:07:20.592 02:55:15 -- accel/accel.sh@21 -- # val= 00:07:20.592 02:55:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.592 02:55:15 -- accel/accel.sh@20 -- # IFS=: 00:07:20.592 02:55:15 -- accel/accel.sh@20 -- # read -r var val 00:07:20.592 02:55:15 -- accel/accel.sh@21 -- # val= 00:07:20.592 02:55:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.592 02:55:15 -- accel/accel.sh@20 -- # IFS=: 00:07:20.592 02:55:15 -- accel/accel.sh@20 -- # read -r var val 00:07:20.592 02:55:15 -- accel/accel.sh@21 -- # val= 00:07:20.592 02:55:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.592 02:55:15 -- accel/accel.sh@20 -- # IFS=: 00:07:20.592 02:55:15 -- accel/accel.sh@20 -- # read -r var val 00:07:20.592 02:55:15 -- accel/accel.sh@21 
-- # val= 00:07:20.592 02:55:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.592 02:55:15 -- accel/accel.sh@20 -- # IFS=: 00:07:20.592 02:55:15 -- accel/accel.sh@20 -- # read -r var val 00:07:20.592 02:55:15 -- accel/accel.sh@21 -- # val= 00:07:20.592 02:55:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.592 02:55:15 -- accel/accel.sh@20 -- # IFS=: 00:07:20.592 02:55:15 -- accel/accel.sh@20 -- # read -r var val 00:07:20.592 02:55:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.592 02:55:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:20.592 02:55:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.592 00:07:20.592 real 0m2.820s 00:07:20.592 user 0m2.501s 00:07:20.592 sys 0m0.313s 00:07:20.592 02:55:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.592 02:55:15 -- common/autotest_common.sh@10 -- # set +x 00:07:20.592 ************************************ 00:07:20.592 END TEST accel_decomp 00:07:20.592 ************************************ 00:07:20.852 02:55:15 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:20.852 02:55:15 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:20.852 02:55:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.852 02:55:15 -- common/autotest_common.sh@10 -- # set +x 00:07:20.852 ************************************ 00:07:20.852 START TEST accel_decmop_full 00:07:20.852 ************************************ 00:07:20.852 02:55:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:20.852 02:55:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.852 02:55:15 -- accel/accel.sh@17 -- # local accel_module 00:07:20.852 02:55:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 
-y -o 0 00:07:20.852 02:55:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:20.852 02:55:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.852 02:55:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.852 02:55:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.852 02:55:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.852 02:55:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.852 02:55:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.852 02:55:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.852 02:55:15 -- accel/accel.sh@42 -- # jq -r . 00:07:20.852 [2024-07-14 02:55:15.888420] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:20.853 [2024-07-14 02:55:15.888500] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1894893 ] 00:07:20.853 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.853 [2024-07-14 02:55:15.949990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.853 [2024-07-14 02:55:16.043574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.229 02:55:17 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:22.229 00:07:22.229 SPDK Configuration: 00:07:22.229 Core mask: 0x1 00:07:22.229 00:07:22.229 Accel Perf Configuration: 00:07:22.229 Workload Type: decompress 00:07:22.229 Transfer size: 111250 bytes 00:07:22.229 Vector count 1 00:07:22.229 Module: software 00:07:22.229 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.229 Queue depth: 32 00:07:22.229 Allocate depth: 32 00:07:22.229 # threads/core: 1 00:07:22.229 Run time: 1 seconds 00:07:22.229 Verify: Yes 00:07:22.229 00:07:22.229 Running for 1 seconds... 00:07:22.229 00:07:22.229 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:22.229 ------------------------------------------------------------------------------------ 00:07:22.229 0,0 3808/s 157 MiB/s 0 0 00:07:22.229 ==================================================================================== 00:07:22.229 Total 3808/s 404 MiB/s 0 0' 00:07:22.229 02:55:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.229 02:55:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:22.229 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.229 02:55:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:22.229 02:55:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.229 02:55:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.229 02:55:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.229 02:55:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.229 02:55:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.229 02:55:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.229 02:55:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.229 02:55:17 -- accel/accel.sh@42 -- # jq -r . 
00:07:22.229 [2024-07-14 02:55:17.317917] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:22.229 [2024-07-14 02:55:17.318000] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1895033 ] 00:07:22.229 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.229 [2024-07-14 02:55:17.383586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.229 [2024-07-14 02:55:17.476961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.487 02:55:17 -- accel/accel.sh@21 -- # val= 00:07:22.487 02:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.487 02:55:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.488 02:55:17 -- accel/accel.sh@21 -- # val= 00:07:22.488 02:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.488 02:55:17 -- accel/accel.sh@21 -- # val= 00:07:22.488 02:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.488 02:55:17 -- accel/accel.sh@21 -- # val=0x1 00:07:22.488 02:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.488 02:55:17 -- accel/accel.sh@21 -- # val= 00:07:22.488 02:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.488 02:55:17 -- accel/accel.sh@21 -- # val= 00:07:22.488 02:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.488 02:55:17 -- 
accel/accel.sh@20 -- # IFS=: 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.488 02:55:17 -- accel/accel.sh@21 -- # val=decompress 00:07:22.488 02:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.488 02:55:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.488 02:55:17 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:22.488 02:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.488 02:55:17 -- accel/accel.sh@21 -- # val= 00:07:22.488 02:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.488 02:55:17 -- accel/accel.sh@21 -- # val=software 00:07:22.488 02:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.488 02:55:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.488 02:55:17 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.488 02:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.488 02:55:17 -- accel/accel.sh@21 -- # val=32 00:07:22.488 02:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.488 02:55:17 -- accel/accel.sh@21 -- # val=32 00:07:22.488 02:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.488 02:55:17 -- 
accel/accel.sh@21 -- # val=1 00:07:22.488 02:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.488 02:55:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.488 02:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.488 02:55:17 -- accel/accel.sh@21 -- # val=Yes 00:07:22.488 02:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.488 02:55:17 -- accel/accel.sh@21 -- # val= 00:07:22.488 02:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:22.488 02:55:17 -- accel/accel.sh@21 -- # val= 00:07:22.488 02:55:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # IFS=: 00:07:22.488 02:55:17 -- accel/accel.sh@20 -- # read -r var val 00:07:23.869 02:55:18 -- accel/accel.sh@21 -- # val= 00:07:23.869 02:55:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.869 02:55:18 -- accel/accel.sh@20 -- # IFS=: 00:07:23.869 02:55:18 -- accel/accel.sh@20 -- # read -r var val 00:07:23.869 02:55:18 -- accel/accel.sh@21 -- # val= 00:07:23.869 02:55:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.869 02:55:18 -- accel/accel.sh@20 -- # IFS=: 00:07:23.869 02:55:18 -- accel/accel.sh@20 -- # read -r var val 00:07:23.869 02:55:18 -- accel/accel.sh@21 -- # val= 00:07:23.869 02:55:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.869 02:55:18 -- accel/accel.sh@20 -- # IFS=: 00:07:23.869 02:55:18 -- accel/accel.sh@20 -- # read -r var val 00:07:23.869 02:55:18 -- accel/accel.sh@21 -- # val= 00:07:23.869 02:55:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.869 02:55:18 
-- accel/accel.sh@20 -- # IFS=: 00:07:23.869 02:55:18 -- accel/accel.sh@20 -- # read -r var val 00:07:23.869 02:55:18 -- accel/accel.sh@21 -- # val= 00:07:23.869 02:55:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.869 02:55:18 -- accel/accel.sh@20 -- # IFS=: 00:07:23.869 02:55:18 -- accel/accel.sh@20 -- # read -r var val 00:07:23.869 02:55:18 -- accel/accel.sh@21 -- # val= 00:07:23.869 02:55:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.869 02:55:18 -- accel/accel.sh@20 -- # IFS=: 00:07:23.869 02:55:18 -- accel/accel.sh@20 -- # read -r var val 00:07:23.869 02:55:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.869 02:55:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:23.869 02:55:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.869 00:07:23.869 real 0m2.857s 00:07:23.869 user 0m2.563s 00:07:23.869 sys 0m0.287s 00:07:23.869 02:55:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.869 02:55:18 -- common/autotest_common.sh@10 -- # set +x 00:07:23.869 ************************************ 00:07:23.869 END TEST accel_decmop_full 00:07:23.869 ************************************ 00:07:23.869 02:55:18 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:23.869 02:55:18 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:23.869 02:55:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.869 02:55:18 -- common/autotest_common.sh@10 -- # set +x 00:07:23.869 ************************************ 00:07:23.869 START TEST accel_decomp_mcore 00:07:23.869 ************************************ 00:07:23.869 02:55:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:23.869 02:55:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.869 02:55:18 -- accel/accel.sh@17 -- # local 
accel_module 00:07:23.869 02:55:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:23.869 02:55:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:23.869 02:55:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.869 02:55:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.869 02:55:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.869 02:55:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.869 02:55:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.869 02:55:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.869 02:55:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.869 02:55:18 -- accel/accel.sh@42 -- # jq -r . 00:07:23.869 [2024-07-14 02:55:18.773849] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:23.870 [2024-07-14 02:55:18.773954] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1895311 ] 00:07:23.870 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.870 [2024-07-14 02:55:18.836505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:23.870 [2024-07-14 02:55:18.932624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.870 [2024-07-14 02:55:18.932679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.870 [2024-07-14 02:55:18.932730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:23.870 [2024-07-14 02:55:18.932733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.251 02:55:20 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:25.251 00:07:25.252 SPDK Configuration: 00:07:25.252 Core mask: 0xf 00:07:25.252 00:07:25.252 Accel Perf Configuration: 00:07:25.252 Workload Type: decompress 00:07:25.252 Transfer size: 4096 bytes 00:07:25.252 Vector count 1 00:07:25.252 Module: software 00:07:25.252 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.252 Queue depth: 32 00:07:25.252 Allocate depth: 32 00:07:25.252 # threads/core: 1 00:07:25.252 Run time: 1 seconds 00:07:25.252 Verify: Yes 00:07:25.252 00:07:25.252 Running for 1 seconds... 00:07:25.252 00:07:25.252 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.252 ------------------------------------------------------------------------------------ 00:07:25.252 0,0 56736/s 104 MiB/s 0 0 00:07:25.252 3,0 57504/s 105 MiB/s 0 0 00:07:25.252 2,0 57472/s 105 MiB/s 0 0 00:07:25.252 1,0 57344/s 105 MiB/s 0 0 00:07:25.252 ==================================================================================== 00:07:25.252 Total 229056/s 894 MiB/s 0 0' 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:25.252 02:55:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:25.252 02:55:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.252 02:55:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.252 02:55:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.252 02:55:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.252 02:55:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.252 02:55:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.252 02:55:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.252 02:55:20 -- 
accel/accel.sh@42 -- # jq -r . 00:07:25.252 [2024-07-14 02:55:20.173035] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:25.252 [2024-07-14 02:55:20.173116] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1895461 ] 00:07:25.252 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.252 [2024-07-14 02:55:20.236401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:25.252 [2024-07-14 02:55:20.332561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.252 [2024-07-14 02:55:20.332619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.252 [2024-07-14 02:55:20.332671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.252 [2024-07-14 02:55:20.332674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.252 02:55:20 -- accel/accel.sh@21 -- # val= 00:07:25.252 02:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:25.252 02:55:20 -- accel/accel.sh@21 -- # val= 00:07:25.252 02:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:25.252 02:55:20 -- accel/accel.sh@21 -- # val= 00:07:25.252 02:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:25.252 02:55:20 -- accel/accel.sh@21 -- # val=0xf 00:07:25.252 02:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:25.252 02:55:20 -- 
accel/accel.sh@21 -- # val= 00:07:25.252 02:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:25.252 02:55:20 -- accel/accel.sh@21 -- # val= 00:07:25.252 02:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:25.252 02:55:20 -- accel/accel.sh@21 -- # val=decompress 00:07:25.252 02:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.252 02:55:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:25.252 02:55:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.252 02:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:25.252 02:55:20 -- accel/accel.sh@21 -- # val= 00:07:25.252 02:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:25.252 02:55:20 -- accel/accel.sh@21 -- # val=software 00:07:25.252 02:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.252 02:55:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:25.252 02:55:20 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:25.252 02:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:25.252 02:55:20 -- accel/accel.sh@21 -- # val=32 00:07:25.252 02:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.252 02:55:20 -- 
accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:25.252 02:55:20 -- accel/accel.sh@21 -- # val=32 00:07:25.252 02:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:25.252 02:55:20 -- accel/accel.sh@21 -- # val=1 00:07:25.252 02:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:25.252 02:55:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.252 02:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:25.252 02:55:20 -- accel/accel.sh@21 -- # val=Yes 00:07:25.252 02:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:25.252 02:55:20 -- accel/accel.sh@21 -- # val= 00:07:25.252 02:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:25.252 02:55:20 -- accel/accel.sh@21 -- # val= 00:07:25.252 02:55:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # IFS=: 00:07:25.252 02:55:20 -- accel/accel.sh@20 -- # read -r var val 00:07:26.633 02:55:21 -- accel/accel.sh@21 -- # val= 00:07:26.633 02:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.633 02:55:21 -- accel/accel.sh@20 -- # IFS=: 00:07:26.633 02:55:21 -- accel/accel.sh@20 -- # read -r var val 00:07:26.633 02:55:21 -- accel/accel.sh@21 -- # val= 00:07:26.633 02:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.633 02:55:21 -- accel/accel.sh@20 -- # IFS=: 00:07:26.633 02:55:21 -- accel/accel.sh@20 -- # read -r var val 00:07:26.633 
02:55:21 -- accel/accel.sh@21 -- # val= 00:07:26.633 02:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.633 02:55:21 -- accel/accel.sh@20 -- # IFS=: 00:07:26.633 02:55:21 -- accel/accel.sh@20 -- # read -r var val 00:07:26.633 02:55:21 -- accel/accel.sh@21 -- # val= 00:07:26.633 02:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.633 02:55:21 -- accel/accel.sh@20 -- # IFS=: 00:07:26.633 02:55:21 -- accel/accel.sh@20 -- # read -r var val 00:07:26.633 02:55:21 -- accel/accel.sh@21 -- # val= 00:07:26.633 02:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.633 02:55:21 -- accel/accel.sh@20 -- # IFS=: 00:07:26.633 02:55:21 -- accel/accel.sh@20 -- # read -r var val 00:07:26.633 02:55:21 -- accel/accel.sh@21 -- # val= 00:07:26.633 02:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.633 02:55:21 -- accel/accel.sh@20 -- # IFS=: 00:07:26.633 02:55:21 -- accel/accel.sh@20 -- # read -r var val 00:07:26.633 02:55:21 -- accel/accel.sh@21 -- # val= 00:07:26.633 02:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.633 02:55:21 -- accel/accel.sh@20 -- # IFS=: 00:07:26.633 02:55:21 -- accel/accel.sh@20 -- # read -r var val 00:07:26.633 02:55:21 -- accel/accel.sh@21 -- # val= 00:07:26.633 02:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.633 02:55:21 -- accel/accel.sh@20 -- # IFS=: 00:07:26.633 02:55:21 -- accel/accel.sh@20 -- # read -r var val 00:07:26.633 02:55:21 -- accel/accel.sh@21 -- # val= 00:07:26.633 02:55:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.633 02:55:21 -- accel/accel.sh@20 -- # IFS=: 00:07:26.633 02:55:21 -- accel/accel.sh@20 -- # read -r var val 00:07:26.633 02:55:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:26.633 02:55:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:26.633 02:55:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.633 00:07:26.633 real 0m2.828s 00:07:26.633 user 0m9.407s 00:07:26.633 sys 0m0.300s 00:07:26.633 02:55:21 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:07:26.633 02:55:21 -- common/autotest_common.sh@10 -- # set +x 00:07:26.633 ************************************ 00:07:26.633 END TEST accel_decomp_mcore 00:07:26.633 ************************************ 00:07:26.633 02:55:21 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.633 02:55:21 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:26.633 02:55:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.633 02:55:21 -- common/autotest_common.sh@10 -- # set +x 00:07:26.633 ************************************ 00:07:26.633 START TEST accel_decomp_full_mcore 00:07:26.633 ************************************ 00:07:26.633 02:55:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.633 02:55:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.633 02:55:21 -- accel/accel.sh@17 -- # local accel_module 00:07:26.633 02:55:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.634 02:55:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:26.634 02:55:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.634 02:55:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.634 02:55:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.634 02:55:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.634 02:55:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.634 02:55:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.634 02:55:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.634 02:55:21 -- accel/accel.sh@42 -- # jq -r . 
00:07:26.634 [2024-07-14 02:55:21.630186] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:26.634 [2024-07-14 02:55:21.630268] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1895626 ] 00:07:26.634 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.634 [2024-07-14 02:55:21.691644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.634 [2024-07-14 02:55:21.788978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.634 [2024-07-14 02:55:21.789033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.634 [2024-07-14 02:55:21.789090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.634 [2024-07-14 02:55:21.789093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.011 02:55:23 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:28.011 00:07:28.011 SPDK Configuration: 00:07:28.011 Core mask: 0xf 00:07:28.011 00:07:28.011 Accel Perf Configuration: 00:07:28.011 Workload Type: decompress 00:07:28.011 Transfer size: 111250 bytes 00:07:28.011 Vector count 1 00:07:28.011 Module: software 00:07:28.011 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:28.011 Queue depth: 32 00:07:28.011 Allocate depth: 32 00:07:28.011 # threads/core: 1 00:07:28.011 Run time: 1 seconds 00:07:28.011 Verify: Yes 00:07:28.011 00:07:28.011 Running for 1 seconds... 
00:07:28.011 00:07:28.011 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.011 ------------------------------------------------------------------------------------ 00:07:28.011 0,0 3776/s 155 MiB/s 0 0 00:07:28.011 3,0 3808/s 157 MiB/s 0 0 00:07:28.011 2,0 3776/s 155 MiB/s 0 0 00:07:28.011 1,0 3808/s 157 MiB/s 0 0 00:07:28.011 ==================================================================================== 00:07:28.011 Total 15168/s 1609 MiB/s 0 0' 00:07:28.011 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.011 02:55:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.011 02:55:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.011 02:55:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:28.011 02:55:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.011 02:55:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.011 02:55:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.011 02:55:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.011 02:55:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.011 02:55:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.011 02:55:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.011 02:55:23 -- accel/accel.sh@42 -- # jq -r . 00:07:28.011 [2024-07-14 02:55:23.071568] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:28.011 [2024-07-14 02:55:23.071649] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1895769 ] 00:07:28.011 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.011 [2024-07-14 02:55:23.136596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:28.011 [2024-07-14 02:55:23.232738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:28.011 [2024-07-14 02:55:23.232792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:28.011 [2024-07-14 02:55:23.232843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:28.011 [2024-07-14 02:55:23.232846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.268 02:55:23 -- accel/accel.sh@21 -- # val= 00:07:28.268 02:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.268 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.268 02:55:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.268 02:55:23 -- accel/accel.sh@21 -- # val= 00:07:28.269 02:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.269 02:55:23 -- accel/accel.sh@21 -- # val= 00:07:28.269 02:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.269 02:55:23 -- accel/accel.sh@21 -- # val=0xf 00:07:28.269 02:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.269 02:55:23 -- accel/accel.sh@21 -- # val= 00:07:28.269 02:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.269 02:55:23 
-- accel/accel.sh@20 -- # read -r var val 00:07:28.269 02:55:23 -- accel/accel.sh@21 -- # val= 00:07:28.269 02:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.269 02:55:23 -- accel/accel.sh@21 -- # val=decompress 00:07:28.269 02:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.269 02:55:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.269 02:55:23 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:28.269 02:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.269 02:55:23 -- accel/accel.sh@21 -- # val= 00:07:28.269 02:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.269 02:55:23 -- accel/accel.sh@21 -- # val=software 00:07:28.269 02:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.269 02:55:23 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.269 02:55:23 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:28.269 02:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.269 02:55:23 -- accel/accel.sh@21 -- # val=32 00:07:28.269 02:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.269 02:55:23 -- accel/accel.sh@21 -- # val=32 00:07:28.269 02:55:23 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.269 02:55:23 -- accel/accel.sh@21 -- # val=1 00:07:28.269 02:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.269 02:55:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.269 02:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.269 02:55:23 -- accel/accel.sh@21 -- # val=Yes 00:07:28.269 02:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.269 02:55:23 -- accel/accel.sh@21 -- # val= 00:07:28.269 02:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # read -r var val 00:07:28.269 02:55:23 -- accel/accel.sh@21 -- # val= 00:07:28.269 02:55:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # IFS=: 00:07:28.269 02:55:23 -- accel/accel.sh@20 -- # read -r var val 00:07:29.647 02:55:24 -- accel/accel.sh@21 -- # val= 00:07:29.647 02:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.647 02:55:24 -- accel/accel.sh@20 -- # IFS=: 00:07:29.647 02:55:24 -- accel/accel.sh@20 -- # read -r var val 00:07:29.647 02:55:24 -- accel/accel.sh@21 -- # val= 00:07:29.647 02:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.647 02:55:24 -- accel/accel.sh@20 -- # IFS=: 00:07:29.647 02:55:24 -- accel/accel.sh@20 -- # read -r var val 00:07:29.647 02:55:24 -- accel/accel.sh@21 -- # val= 00:07:29.647 02:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.647 02:55:24 -- accel/accel.sh@20 -- # IFS=: 00:07:29.647 
02:55:24 -- accel/accel.sh@20 -- # read -r var val 00:07:29.647 02:55:24 -- accel/accel.sh@21 -- # val= 00:07:29.647 02:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.647 02:55:24 -- accel/accel.sh@20 -- # IFS=: 00:07:29.647 02:55:24 -- accel/accel.sh@20 -- # read -r var val 00:07:29.647 02:55:24 -- accel/accel.sh@21 -- # val= 00:07:29.647 02:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.647 02:55:24 -- accel/accel.sh@20 -- # IFS=: 00:07:29.647 02:55:24 -- accel/accel.sh@20 -- # read -r var val 00:07:29.647 02:55:24 -- accel/accel.sh@21 -- # val= 00:07:29.647 02:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.647 02:55:24 -- accel/accel.sh@20 -- # IFS=: 00:07:29.647 02:55:24 -- accel/accel.sh@20 -- # read -r var val 00:07:29.647 02:55:24 -- accel/accel.sh@21 -- # val= 00:07:29.647 02:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.647 02:55:24 -- accel/accel.sh@20 -- # IFS=: 00:07:29.647 02:55:24 -- accel/accel.sh@20 -- # read -r var val 00:07:29.647 02:55:24 -- accel/accel.sh@21 -- # val= 00:07:29.647 02:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.647 02:55:24 -- accel/accel.sh@20 -- # IFS=: 00:07:29.647 02:55:24 -- accel/accel.sh@20 -- # read -r var val 00:07:29.647 02:55:24 -- accel/accel.sh@21 -- # val= 00:07:29.647 02:55:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.647 02:55:24 -- accel/accel.sh@20 -- # IFS=: 00:07:29.647 02:55:24 -- accel/accel.sh@20 -- # read -r var val 00:07:29.647 02:55:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:29.647 02:55:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:29.647 02:55:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.647 00:07:29.647 real 0m2.871s 00:07:29.647 user 0m9.541s 00:07:29.647 sys 0m0.318s 00:07:29.647 02:55:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.647 02:55:24 -- common/autotest_common.sh@10 -- # set +x 00:07:29.647 ************************************ 00:07:29.647 END TEST 
accel_decomp_full_mcore 00:07:29.647 ************************************ 00:07:29.647 02:55:24 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:29.647 02:55:24 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:29.647 02:55:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.647 02:55:24 -- common/autotest_common.sh@10 -- # set +x 00:07:29.647 ************************************ 00:07:29.647 START TEST accel_decomp_mthread 00:07:29.647 ************************************ 00:07:29.647 02:55:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:29.647 02:55:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.647 02:55:24 -- accel/accel.sh@17 -- # local accel_module 00:07:29.647 02:55:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:29.647 02:55:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:29.647 02:55:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.647 02:55:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.647 02:55:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.647 02:55:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.647 02:55:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.647 02:55:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.647 02:55:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.647 02:55:24 -- accel/accel.sh@42 -- # jq -r . 00:07:29.647 [2024-07-14 02:55:24.525671] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:29.647 [2024-07-14 02:55:24.525751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896050 ] 00:07:29.647 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.647 [2024-07-14 02:55:24.586472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.647 [2024-07-14 02:55:24.684753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.031 02:55:25 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:31.031 00:07:31.031 SPDK Configuration: 00:07:31.031 Core mask: 0x1 00:07:31.031 00:07:31.031 Accel Perf Configuration: 00:07:31.031 Workload Type: decompress 00:07:31.031 Transfer size: 4096 bytes 00:07:31.031 Vector count 1 00:07:31.031 Module: software 00:07:31.031 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:31.031 Queue depth: 32 00:07:31.031 Allocate depth: 32 00:07:31.031 # threads/core: 2 00:07:31.031 Run time: 1 seconds 00:07:31.031 Verify: Yes 00:07:31.031 00:07:31.031 Running for 1 seconds... 
00:07:31.031 00:07:31.031 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:31.031 ------------------------------------------------------------------------------------ 00:07:31.031 0,1 28096/s 51 MiB/s 0 0 00:07:31.031 0,0 28000/s 51 MiB/s 0 0 00:07:31.031 ==================================================================================== 00:07:31.031 Total 56096/s 219 MiB/s 0 0' 00:07:31.031 02:55:25 -- accel/accel.sh@20 -- # IFS=: 00:07:31.031 02:55:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:31.031 02:55:25 -- accel/accel.sh@20 -- # read -r var val 00:07:31.031 02:55:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:31.031 02:55:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.031 02:55:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.031 02:55:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.031 02:55:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.031 02:55:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.031 02:55:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.031 02:55:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.031 02:55:25 -- accel/accel.sh@42 -- # jq -r . 00:07:31.031 [2024-07-14 02:55:25.937490] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:31.031 [2024-07-14 02:55:25.937573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896198 ] 00:07:31.031 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.031 [2024-07-14 02:55:25.997170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.031 [2024-07-14 02:55:26.089254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.031 02:55:26 -- accel/accel.sh@21 -- # val= 00:07:31.031 02:55:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.031 02:55:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.031 02:55:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.031 02:55:26 -- accel/accel.sh@21 -- # val= 00:07:31.031 02:55:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.031 02:55:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.031 02:55:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.031 02:55:26 -- accel/accel.sh@21 -- # val= 00:07:31.031 02:55:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.031 02:55:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.031 02:55:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.031 02:55:26 -- accel/accel.sh@21 -- # val=0x1 00:07:31.031 02:55:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.031 02:55:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.031 02:55:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.031 02:55:26 -- accel/accel.sh@21 -- # val= 00:07:31.031 02:55:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.032 02:55:26 -- accel/accel.sh@21 -- # val= 00:07:31.032 02:55:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.032 02:55:26 -- accel/accel.sh@21 
-- # val=decompress 00:07:31.032 02:55:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.032 02:55:26 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.032 02:55:26 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.032 02:55:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.032 02:55:26 -- accel/accel.sh@21 -- # val= 00:07:31.032 02:55:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.032 02:55:26 -- accel/accel.sh@21 -- # val=software 00:07:31.032 02:55:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.032 02:55:26 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.032 02:55:26 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:31.032 02:55:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.032 02:55:26 -- accel/accel.sh@21 -- # val=32 00:07:31.032 02:55:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.032 02:55:26 -- accel/accel.sh@21 -- # val=32 00:07:31.032 02:55:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.032 02:55:26 -- accel/accel.sh@21 -- # val=2 00:07:31.032 02:55:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # 
IFS=: 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.032 02:55:26 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.032 02:55:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.032 02:55:26 -- accel/accel.sh@21 -- # val=Yes 00:07:31.032 02:55:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.032 02:55:26 -- accel/accel.sh@21 -- # val= 00:07:31.032 02:55:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # read -r var val 00:07:31.032 02:55:26 -- accel/accel.sh@21 -- # val= 00:07:31.032 02:55:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # IFS=: 00:07:31.032 02:55:26 -- accel/accel.sh@20 -- # read -r var val 00:07:32.453 02:55:27 -- accel/accel.sh@21 -- # val= 00:07:32.453 02:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.453 02:55:27 -- accel/accel.sh@20 -- # IFS=: 00:07:32.453 02:55:27 -- accel/accel.sh@20 -- # read -r var val 00:07:32.453 02:55:27 -- accel/accel.sh@21 -- # val= 00:07:32.454 02:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.454 02:55:27 -- accel/accel.sh@20 -- # IFS=: 00:07:32.454 02:55:27 -- accel/accel.sh@20 -- # read -r var val 00:07:32.454 02:55:27 -- accel/accel.sh@21 -- # val= 00:07:32.454 02:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.454 02:55:27 -- accel/accel.sh@20 -- # IFS=: 00:07:32.454 02:55:27 -- accel/accel.sh@20 -- # read -r var val 00:07:32.454 02:55:27 -- accel/accel.sh@21 -- # val= 00:07:32.454 02:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.454 02:55:27 -- accel/accel.sh@20 -- # IFS=: 00:07:32.454 02:55:27 -- accel/accel.sh@20 -- # read -r var val 00:07:32.454 02:55:27 -- accel/accel.sh@21 
-- # val= 00:07:32.454 02:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.454 02:55:27 -- accel/accel.sh@20 -- # IFS=: 00:07:32.454 02:55:27 -- accel/accel.sh@20 -- # read -r var val 00:07:32.454 02:55:27 -- accel/accel.sh@21 -- # val= 00:07:32.454 02:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.454 02:55:27 -- accel/accel.sh@20 -- # IFS=: 00:07:32.454 02:55:27 -- accel/accel.sh@20 -- # read -r var val 00:07:32.454 02:55:27 -- accel/accel.sh@21 -- # val= 00:07:32.454 02:55:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.454 02:55:27 -- accel/accel.sh@20 -- # IFS=: 00:07:32.454 02:55:27 -- accel/accel.sh@20 -- # read -r var val 00:07:32.454 02:55:27 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:32.454 02:55:27 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:32.454 02:55:27 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.454 00:07:32.454 real 0m2.831s 00:07:32.454 user 0m2.532s 00:07:32.454 sys 0m0.292s 00:07:32.454 02:55:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.454 02:55:27 -- common/autotest_common.sh@10 -- # set +x 00:07:32.454 ************************************ 00:07:32.454 END TEST accel_decomp_mthread 00:07:32.454 ************************************ 00:07:32.454 02:55:27 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.454 02:55:27 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:32.454 02:55:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.454 02:55:27 -- common/autotest_common.sh@10 -- # set +x 00:07:32.454 ************************************ 00:07:32.454 START TEST accel_deomp_full_mthread 00:07:32.454 ************************************ 00:07:32.454 02:55:27 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 
00:07:32.454 02:55:27 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.454 02:55:27 -- accel/accel.sh@17 -- # local accel_module 00:07:32.454 02:55:27 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.454 02:55:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.454 02:55:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.454 02:55:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.454 02:55:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.454 02:55:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.454 02:55:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.454 02:55:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.454 02:55:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.454 02:55:27 -- accel/accel.sh@42 -- # jq -r . 00:07:32.454 [2024-07-14 02:55:27.382439] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:32.454 [2024-07-14 02:55:27.382519] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896355 ] 00:07:32.454 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.454 [2024-07-14 02:55:27.447266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.454 [2024-07-14 02:55:27.541443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.833 02:55:28 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:33.833 00:07:33.833 SPDK Configuration: 00:07:33.833 Core mask: 0x1 00:07:33.833 00:07:33.833 Accel Perf Configuration: 00:07:33.833 Workload Type: decompress 00:07:33.833 Transfer size: 111250 bytes 00:07:33.833 Vector count 1 00:07:33.833 Module: software 00:07:33.833 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:33.833 Queue depth: 32 00:07:33.833 Allocate depth: 32 00:07:33.833 # threads/core: 2 00:07:33.833 Run time: 1 seconds 00:07:33.833 Verify: Yes 00:07:33.833 00:07:33.833 Running for 1 seconds... 00:07:33.833 00:07:33.833 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.833 ------------------------------------------------------------------------------------ 00:07:33.833 0,1 1952/s 80 MiB/s 0 0 00:07:33.833 0,0 1920/s 79 MiB/s 0 0 00:07:33.833 ==================================================================================== 00:07:33.833 Total 3872/s 410 MiB/s 0 0' 00:07:33.833 02:55:28 -- accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.833 02:55:28 -- accel/accel.sh@20 -- # read -r var val 00:07:33.833 02:55:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:33.833 02:55:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.833 02:55:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.833 02:55:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.833 02:55:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.833 02:55:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.833 02:55:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.833 02:55:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.833 02:55:28 -- accel/accel.sh@42 -- # jq -r . 
00:07:33.833 [2024-07-14 02:55:28.837610] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:33.833 [2024-07-14 02:55:28.837692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896501 ] 00:07:33.833 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.833 [2024-07-14 02:55:28.899350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.833 [2024-07-14 02:55:28.991723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.833 02:55:29 -- accel/accel.sh@21 -- # val= 00:07:33.833 02:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # read -r var val 00:07:33.833 02:55:29 -- accel/accel.sh@21 -- # val= 00:07:33.833 02:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # read -r var val 00:07:33.833 02:55:29 -- accel/accel.sh@21 -- # val= 00:07:33.833 02:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # read -r var val 00:07:33.833 02:55:29 -- accel/accel.sh@21 -- # val=0x1 00:07:33.833 02:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # read -r var val 00:07:33.833 02:55:29 -- accel/accel.sh@21 -- # val= 00:07:33.833 02:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # read -r var val 00:07:33.833 02:55:29 -- accel/accel.sh@21 -- # val= 00:07:33.833 02:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.833 02:55:29 -- 
accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # read -r var val 00:07:33.833 02:55:29 -- accel/accel.sh@21 -- # val=decompress 00:07:33.833 02:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.833 02:55:29 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # read -r var val 00:07:33.833 02:55:29 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:33.833 02:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # read -r var val 00:07:33.833 02:55:29 -- accel/accel.sh@21 -- # val= 00:07:33.833 02:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # read -r var val 00:07:33.833 02:55:29 -- accel/accel.sh@21 -- # val=software 00:07:33.833 02:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.833 02:55:29 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # read -r var val 00:07:33.833 02:55:29 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:33.833 02:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # read -r var val 00:07:33.833 02:55:29 -- accel/accel.sh@21 -- # val=32 00:07:33.833 02:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # read -r var val 00:07:33.833 02:55:29 -- accel/accel.sh@21 -- # val=32 00:07:33.833 02:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # read -r var val 00:07:33.833 02:55:29 -- 
accel/accel.sh@21 -- # val=2 00:07:33.833 02:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # read -r var val 00:07:33.833 02:55:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.833 02:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # read -r var val 00:07:33.833 02:55:29 -- accel/accel.sh@21 -- # val=Yes 00:07:33.833 02:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # read -r var val 00:07:33.833 02:55:29 -- accel/accel.sh@21 -- # val= 00:07:33.833 02:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # read -r var val 00:07:33.833 02:55:29 -- accel/accel.sh@21 -- # val= 00:07:33.833 02:55:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # IFS=: 00:07:33.833 02:55:29 -- accel/accel.sh@20 -- # read -r var val 00:07:35.206 02:55:30 -- accel/accel.sh@21 -- # val= 00:07:35.206 02:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.206 02:55:30 -- accel/accel.sh@20 -- # IFS=: 00:07:35.206 02:55:30 -- accel/accel.sh@20 -- # read -r var val 00:07:35.206 02:55:30 -- accel/accel.sh@21 -- # val= 00:07:35.206 02:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.206 02:55:30 -- accel/accel.sh@20 -- # IFS=: 00:07:35.206 02:55:30 -- accel/accel.sh@20 -- # read -r var val 00:07:35.206 02:55:30 -- accel/accel.sh@21 -- # val= 00:07:35.206 02:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.206 02:55:30 -- accel/accel.sh@20 -- # IFS=: 00:07:35.206 02:55:30 -- accel/accel.sh@20 -- # read -r var val 00:07:35.206 02:55:30 -- accel/accel.sh@21 -- # val= 00:07:35.206 02:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.206 02:55:30 
-- accel/accel.sh@20 -- # IFS=: 00:07:35.206 02:55:30 -- accel/accel.sh@20 -- # read -r var val 00:07:35.206 02:55:30 -- accel/accel.sh@21 -- # val= 00:07:35.206 02:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.206 02:55:30 -- accel/accel.sh@20 -- # IFS=: 00:07:35.206 02:55:30 -- accel/accel.sh@20 -- # read -r var val 00:07:35.206 02:55:30 -- accel/accel.sh@21 -- # val= 00:07:35.206 02:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.206 02:55:30 -- accel/accel.sh@20 -- # IFS=: 00:07:35.206 02:55:30 -- accel/accel.sh@20 -- # read -r var val 00:07:35.206 02:55:30 -- accel/accel.sh@21 -- # val= 00:07:35.206 02:55:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.206 02:55:30 -- accel/accel.sh@20 -- # IFS=: 00:07:35.206 02:55:30 -- accel/accel.sh@20 -- # read -r var val 00:07:35.206 02:55:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:35.206 02:55:30 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:35.206 02:55:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.206 00:07:35.206 real 0m2.905s 00:07:35.206 user 0m2.606s 00:07:35.206 sys 0m0.292s 00:07:35.206 02:55:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.206 02:55:30 -- common/autotest_common.sh@10 -- # set +x 00:07:35.206 ************************************ 00:07:35.206 END TEST accel_deomp_full_mthread 00:07:35.206 ************************************ 00:07:35.206 02:55:30 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:35.206 02:55:30 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:35.206 02:55:30 -- accel/accel.sh@129 -- # build_accel_config 00:07:35.206 02:55:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.206 02:55:30 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:35.206 02:55:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:35.206 02:55:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.206 02:55:30 
-- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.206 02:55:30 -- common/autotest_common.sh@10 -- # set +x 00:07:35.206 02:55:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.206 02:55:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.206 02:55:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.206 02:55:30 -- accel/accel.sh@42 -- # jq -r . 00:07:35.206 ************************************ 00:07:35.206 START TEST accel_dif_functional_tests 00:07:35.206 ************************************ 00:07:35.206 02:55:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:35.206 [2024-07-14 02:55:30.334719] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:35.206 [2024-07-14 02:55:30.334792] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896774 ] 00:07:35.206 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.206 [2024-07-14 02:55:30.396165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:35.465 [2024-07-14 02:55:30.489723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.465 [2024-07-14 02:55:30.489783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.465 [2024-07-14 02:55:30.489786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.465 00:07:35.465 00:07:35.465 CUnit - A unit testing framework for C - Version 2.1-3 00:07:35.465 http://cunit.sourceforge.net/ 00:07:35.465 00:07:35.465 00:07:35.465 Suite: accel_dif 00:07:35.465 Test: verify: DIF generated, GUARD check ...passed 00:07:35.465 Test: verify: DIF generated, APPTAG check ...passed 00:07:35.465 Test: verify: DIF generated, REFTAG check ...passed 00:07:35.465 Test: verify: DIF not generated, GUARD check ...[2024-07-14 02:55:30.581292] dif.c: 
777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:35.465 [2024-07-14 02:55:30.581357] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:35.465 passed 00:07:35.465 Test: verify: DIF not generated, APPTAG check ...[2024-07-14 02:55:30.581398] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:35.465 [2024-07-14 02:55:30.581428] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:35.465 passed 00:07:35.465 Test: verify: DIF not generated, REFTAG check ...[2024-07-14 02:55:30.581462] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:35.465 [2024-07-14 02:55:30.581489] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:35.465 passed 00:07:35.465 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:35.465 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-14 02:55:30.581557] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:35.465 passed 00:07:35.465 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:35.465 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:35.465 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:35.465 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-14 02:55:30.581706] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:35.465 passed 00:07:35.465 Test: generate copy: DIF generated, GUARD check ...passed 00:07:35.465 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:35.465 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:35.465 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:35.465 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 
00:07:35.465 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:35.465 Test: generate copy: iovecs-len validate ...[2024-07-14 02:55:30.581963] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:07:35.465 passed 00:07:35.465 Test: generate copy: buffer alignment validate ...passed 00:07:35.465 00:07:35.465 Run Summary: Type Total Ran Passed Failed Inactive 00:07:35.465 suites 1 1 n/a 0 0 00:07:35.465 tests 20 20 20 0 0 00:07:35.465 asserts 204 204 204 0 n/a 00:07:35.465 00:07:35.465 Elapsed time = 0.003 seconds 00:07:35.723 00:07:35.723 real 0m0.507s 00:07:35.723 user 0m0.807s 00:07:35.723 sys 0m0.167s 00:07:35.723 02:55:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.723 02:55:30 -- common/autotest_common.sh@10 -- # set +x 00:07:35.723 ************************************ 00:07:35.723 END TEST accel_dif_functional_tests 00:07:35.723 ************************************ 00:07:35.723 00:07:35.723 real 0m59.916s 00:07:35.723 user 1m7.663s 00:07:35.723 sys 0m7.258s 00:07:35.723 02:55:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.723 02:55:30 -- common/autotest_common.sh@10 -- # set +x 00:07:35.723 ************************************ 00:07:35.723 END TEST accel 00:07:35.723 ************************************ 00:07:35.723 02:55:30 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:35.723 02:55:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:35.723 02:55:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:35.723 02:55:30 -- common/autotest_common.sh@10 -- # set +x 00:07:35.723 ************************************ 00:07:35.723 START TEST accel_rpc 00:07:35.723 ************************************ 00:07:35.723 02:55:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 
00:07:35.723 * Looking for test storage... 00:07:35.723 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:35.723 02:55:30 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:35.723 02:55:30 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1896848 00:07:35.723 02:55:30 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:35.723 02:55:30 -- accel/accel_rpc.sh@15 -- # waitforlisten 1896848 00:07:35.723 02:55:30 -- common/autotest_common.sh@819 -- # '[' -z 1896848 ']' 00:07:35.723 02:55:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.723 02:55:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:35.723 02:55:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.723 02:55:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:35.723 02:55:30 -- common/autotest_common.sh@10 -- # set +x 00:07:35.723 [2024-07-14 02:55:30.956170] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:35.723 [2024-07-14 02:55:30.956278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1896848 ] 00:07:35.981 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.981 [2024-07-14 02:55:31.014264] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.981 [2024-07-14 02:55:31.095951] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:35.981 [2024-07-14 02:55:31.096110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.981 02:55:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:35.981 02:55:31 -- common/autotest_common.sh@852 -- # return 0 00:07:35.981 02:55:31 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:35.981 02:55:31 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:35.981 02:55:31 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:35.981 02:55:31 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:35.981 02:55:31 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:35.981 02:55:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:35.981 02:55:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:35.981 02:55:31 -- common/autotest_common.sh@10 -- # set +x 00:07:35.981 ************************************ 00:07:35.981 START TEST accel_assign_opcode 00:07:35.981 ************************************ 00:07:35.981 02:55:31 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:35.981 02:55:31 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:35.981 02:55:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.981 02:55:31 -- common/autotest_common.sh@10 -- # set +x 00:07:35.981 [2024-07-14 02:55:31.152634] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation 
copy will be assigned to module incorrect 00:07:35.981 02:55:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.981 02:55:31 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:35.981 02:55:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.981 02:55:31 -- common/autotest_common.sh@10 -- # set +x 00:07:35.981 [2024-07-14 02:55:31.160650] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:35.981 02:55:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:35.981 02:55:31 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:35.981 02:55:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:35.981 02:55:31 -- common/autotest_common.sh@10 -- # set +x 00:07:36.238 02:55:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.238 02:55:31 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:36.238 02:55:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.238 02:55:31 -- common/autotest_common.sh@10 -- # set +x 00:07:36.238 02:55:31 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:36.238 02:55:31 -- accel/accel_rpc.sh@42 -- # grep software 00:07:36.238 02:55:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.238 software 00:07:36.238 00:07:36.238 real 0m0.293s 00:07:36.238 user 0m0.040s 00:07:36.238 sys 0m0.007s 00:07:36.238 02:55:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.238 02:55:31 -- common/autotest_common.sh@10 -- # set +x 00:07:36.238 ************************************ 00:07:36.238 END TEST accel_assign_opcode 00:07:36.238 ************************************ 00:07:36.238 02:55:31 -- accel/accel_rpc.sh@55 -- # killprocess 1896848 00:07:36.238 02:55:31 -- common/autotest_common.sh@926 -- # '[' -z 1896848 ']' 00:07:36.238 02:55:31 -- common/autotest_common.sh@930 -- # kill -0 1896848 00:07:36.238 02:55:31 -- common/autotest_common.sh@931 -- # uname 00:07:36.238 
02:55:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:36.238 02:55:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1896848 00:07:36.498 02:55:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:36.498 02:55:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:36.498 02:55:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1896848' 00:07:36.498 killing process with pid 1896848 00:07:36.498 02:55:31 -- common/autotest_common.sh@945 -- # kill 1896848 00:07:36.498 02:55:31 -- common/autotest_common.sh@950 -- # wait 1896848 00:07:36.756 00:07:36.756 real 0m1.044s 00:07:36.756 user 0m0.974s 00:07:36.756 sys 0m0.395s 00:07:36.756 02:55:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.756 02:55:31 -- common/autotest_common.sh@10 -- # set +x 00:07:36.756 ************************************ 00:07:36.756 END TEST accel_rpc 00:07:36.756 ************************************ 00:07:36.756 02:55:31 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:36.756 02:55:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.756 02:55:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.756 02:55:31 -- common/autotest_common.sh@10 -- # set +x 00:07:36.756 ************************************ 00:07:36.756 START TEST app_cmdline 00:07:36.756 ************************************ 00:07:36.756 02:55:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:36.756 * Looking for test storage... 
00:07:36.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:36.756 02:55:31 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:36.756 02:55:31 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1897053 00:07:36.756 02:55:31 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:36.756 02:55:31 -- app/cmdline.sh@18 -- # waitforlisten 1897053 00:07:36.756 02:55:31 -- common/autotest_common.sh@819 -- # '[' -z 1897053 ']' 00:07:36.756 02:55:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.756 02:55:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:36.756 02:55:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.756 02:55:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:36.756 02:55:31 -- common/autotest_common.sh@10 -- # set +x 00:07:37.016 [2024-07-14 02:55:32.033732] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:37.016 [2024-07-14 02:55:32.033817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1897053 ] 00:07:37.016 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.016 [2024-07-14 02:55:32.099589] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.016 [2024-07-14 02:55:32.191136] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:37.016 [2024-07-14 02:55:32.191321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.951 02:55:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:37.951 02:55:32 -- common/autotest_common.sh@852 -- # return 0 00:07:37.951 02:55:32 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:37.951 { 00:07:37.951 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:07:37.951 "fields": { 00:07:37.951 "major": 24, 00:07:37.951 "minor": 1, 00:07:37.951 "patch": 1, 00:07:37.951 "suffix": "-pre", 00:07:37.951 "commit": "4b94202c6" 00:07:37.951 } 00:07:37.951 } 00:07:37.951 02:55:33 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:37.951 02:55:33 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:37.951 02:55:33 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:37.951 02:55:33 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:37.951 02:55:33 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:37.951 02:55:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:37.951 02:55:33 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:37.951 02:55:33 -- common/autotest_common.sh@10 -- # set +x 00:07:37.951 02:55:33 -- app/cmdline.sh@26 -- # sort 00:07:37.951 02:55:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:38.208 
02:55:33 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:38.208 02:55:33 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:38.208 02:55:33 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.208 02:55:33 -- common/autotest_common.sh@640 -- # local es=0 00:07:38.208 02:55:33 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.208 02:55:33 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.208 02:55:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:38.208 02:55:33 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.208 02:55:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:38.208 02:55:33 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.208 02:55:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:38.208 02:55:33 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:38.208 02:55:33 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:38.208 02:55:33 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.208 request: 00:07:38.208 { 00:07:38.208 "method": "env_dpdk_get_mem_stats", 00:07:38.208 "req_id": 1 00:07:38.208 } 00:07:38.208 Got JSON-RPC error response 00:07:38.208 response: 00:07:38.208 { 00:07:38.208 "code": -32601, 00:07:38.208 "message": "Method not found" 00:07:38.208 } 00:07:38.208 02:55:33 -- common/autotest_common.sh@643 
-- # es=1 00:07:38.208 02:55:33 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:38.208 02:55:33 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:38.208 02:55:33 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:38.208 02:55:33 -- app/cmdline.sh@1 -- # killprocess 1897053 00:07:38.208 02:55:33 -- common/autotest_common.sh@926 -- # '[' -z 1897053 ']' 00:07:38.208 02:55:33 -- common/autotest_common.sh@930 -- # kill -0 1897053 00:07:38.208 02:55:33 -- common/autotest_common.sh@931 -- # uname 00:07:38.208 02:55:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:38.208 02:55:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1897053 00:07:38.468 02:55:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:38.468 02:55:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:38.468 02:55:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1897053' 00:07:38.468 killing process with pid 1897053 00:07:38.468 02:55:33 -- common/autotest_common.sh@945 -- # kill 1897053 00:07:38.468 02:55:33 -- common/autotest_common.sh@950 -- # wait 1897053 00:07:38.727 00:07:38.728 real 0m1.961s 00:07:38.728 user 0m2.414s 00:07:38.728 sys 0m0.499s 00:07:38.728 02:55:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.728 02:55:33 -- common/autotest_common.sh@10 -- # set +x 00:07:38.728 ************************************ 00:07:38.728 END TEST app_cmdline 00:07:38.728 ************************************ 00:07:38.728 02:55:33 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:38.728 02:55:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:38.728 02:55:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.728 02:55:33 -- common/autotest_common.sh@10 -- # set +x 00:07:38.728 ************************************ 00:07:38.728 START TEST version 00:07:38.728 
************************************ 00:07:38.728 02:55:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:38.728 * Looking for test storage... 00:07:38.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:38.728 02:55:33 -- app/version.sh@17 -- # get_header_version major 00:07:38.728 02:55:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:38.728 02:55:33 -- app/version.sh@14 -- # cut -f2 00:07:38.728 02:55:33 -- app/version.sh@14 -- # tr -d '"' 00:07:38.728 02:55:33 -- app/version.sh@17 -- # major=24 00:07:38.728 02:55:33 -- app/version.sh@18 -- # get_header_version minor 00:07:38.728 02:55:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:38.728 02:55:33 -- app/version.sh@14 -- # cut -f2 00:07:38.728 02:55:33 -- app/version.sh@14 -- # tr -d '"' 00:07:38.728 02:55:33 -- app/version.sh@18 -- # minor=1 00:07:38.728 02:55:33 -- app/version.sh@19 -- # get_header_version patch 00:07:38.728 02:55:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:38.728 02:55:33 -- app/version.sh@14 -- # cut -f2 00:07:38.728 02:55:33 -- app/version.sh@14 -- # tr -d '"' 00:07:38.987 02:55:33 -- app/version.sh@19 -- # patch=1 00:07:38.987 02:55:33 -- app/version.sh@20 -- # get_header_version suffix 00:07:38.987 02:55:33 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:38.987 02:55:33 -- app/version.sh@14 -- # cut -f2 00:07:38.987 02:55:33 -- app/version.sh@14 -- # tr -d '"' 00:07:38.987 02:55:33 -- app/version.sh@20 -- # suffix=-pre 00:07:38.987 02:55:33 -- 
app/version.sh@22 -- # version=24.1 00:07:38.987 02:55:33 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:38.987 02:55:33 -- app/version.sh@25 -- # version=24.1.1 00:07:38.987 02:55:33 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:38.987 02:55:33 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:38.987 02:55:33 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:38.987 02:55:34 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:38.987 02:55:34 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:38.987 00:07:38.987 real 0m0.102s 00:07:38.987 user 0m0.050s 00:07:38.987 sys 0m0.073s 00:07:38.987 02:55:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.987 02:55:34 -- common/autotest_common.sh@10 -- # set +x 00:07:38.987 ************************************ 00:07:38.987 END TEST version 00:07:38.987 ************************************ 00:07:38.987 02:55:34 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:38.987 02:55:34 -- spdk/autotest.sh@204 -- # uname -s 00:07:38.987 02:55:34 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:38.987 02:55:34 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:38.987 02:55:34 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:38.987 02:55:34 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:38.987 02:55:34 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:38.987 02:55:34 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:38.987 02:55:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:38.987 02:55:34 -- common/autotest_common.sh@10 -- # set +x 00:07:38.987 02:55:34 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:38.987 02:55:34 -- spdk/autotest.sh@278 -- 
# '[' 0 -eq 1 ']' 00:07:38.987 02:55:34 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:38.987 02:55:34 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:38.987 02:55:34 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:38.987 02:55:34 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:38.987 02:55:34 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:38.987 02:55:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:38.987 02:55:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.987 02:55:34 -- common/autotest_common.sh@10 -- # set +x 00:07:38.987 ************************************ 00:07:38.987 START TEST nvmf_tcp 00:07:38.987 ************************************ 00:07:38.987 02:55:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:38.987 * Looking for test storage... 00:07:38.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:38.988 02:55:34 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:38.988 02:55:34 -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:38.988 02:55:34 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.988 02:55:34 -- nvmf/common.sh@7 -- # uname -s 00:07:38.988 02:55:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.988 02:55:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.988 02:55:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.988 02:55:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.988 02:55:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.988 02:55:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.988 02:55:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.988 02:55:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.988 02:55:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.988 02:55:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.988 02:55:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:38.988 02:55:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:38.988 02:55:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.988 02:55:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.988 02:55:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.988 02:55:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.988 02:55:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.988 02:55:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.988 02:55:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.988 02:55:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.988 02:55:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.988 02:55:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.988 02:55:34 -- paths/export.sh@5 -- # export PATH 00:07:38.988 02:55:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.988 02:55:34 -- nvmf/common.sh@46 -- # : 0 00:07:38.988 02:55:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:38.988 02:55:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:38.988 
02:55:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:38.988 02:55:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.988 02:55:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.988 02:55:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:38.988 02:55:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:38.988 02:55:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:38.988 02:55:34 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:38.988 02:55:34 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:38.988 02:55:34 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:38.988 02:55:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:38.988 02:55:34 -- common/autotest_common.sh@10 -- # set +x 00:07:38.988 02:55:34 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:38.988 02:55:34 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:38.988 02:55:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:38.988 02:55:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.988 02:55:34 -- common/autotest_common.sh@10 -- # set +x 00:07:38.988 ************************************ 00:07:38.988 START TEST nvmf_example 00:07:38.988 ************************************ 00:07:38.988 02:55:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:38.988 * Looking for test storage... 
00:07:38.988 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:38.988 02:55:34 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:38.988 02:55:34 -- nvmf/common.sh@7 -- # uname -s 00:07:38.988 02:55:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:38.988 02:55:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:38.988 02:55:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:38.988 02:55:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:38.988 02:55:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:38.988 02:55:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:38.988 02:55:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:38.988 02:55:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:38.988 02:55:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:38.988 02:55:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:38.988 02:55:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:38.988 02:55:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:38.988 02:55:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:38.988 02:55:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:38.988 02:55:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:38.988 02:55:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:38.988 02:55:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:38.988 02:55:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:38.988 02:55:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:38.988 02:55:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.988 02:55:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.988 02:55:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.988 02:55:34 -- paths/export.sh@5 -- # export PATH 00:07:38.988 02:55:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:38.988 02:55:34 -- nvmf/common.sh@46 -- # : 0 00:07:38.988 02:55:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:38.988 02:55:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:38.988 02:55:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:38.988 02:55:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:38.988 02:55:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:38.988 02:55:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:38.988 02:55:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:38.988 02:55:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:38.988 02:55:34 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:38.988 02:55:34 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:38.988 02:55:34 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:38.988 02:55:34 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:38.988 02:55:34 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:38.988 02:55:34 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:38.988 02:55:34 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:38.988 02:55:34 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:38.988 02:55:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:38.988 02:55:34 -- common/autotest_common.sh@10 -- # set +x 00:07:38.988 02:55:34 -- 
target/nvmf_example.sh@41 -- # nvmftestinit 00:07:38.988 02:55:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:38.988 02:55:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:38.988 02:55:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:38.988 02:55:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:38.988 02:55:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:38.988 02:55:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:38.988 02:55:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:38.988 02:55:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.988 02:55:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:38.988 02:55:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:38.988 02:55:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:38.988 02:55:34 -- common/autotest_common.sh@10 -- # set +x 00:07:41.523 02:55:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:41.524 02:55:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:41.524 02:55:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:41.524 02:55:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:41.524 02:55:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:41.524 02:55:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:41.524 02:55:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:41.524 02:55:36 -- nvmf/common.sh@294 -- # net_devs=() 00:07:41.524 02:55:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:41.524 02:55:36 -- nvmf/common.sh@295 -- # e810=() 00:07:41.524 02:55:36 -- nvmf/common.sh@295 -- # local -ga e810 00:07:41.524 02:55:36 -- nvmf/common.sh@296 -- # x722=() 00:07:41.524 02:55:36 -- nvmf/common.sh@296 -- # local -ga x722 00:07:41.524 02:55:36 -- nvmf/common.sh@297 -- # mlx=() 00:07:41.524 02:55:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:41.524 02:55:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:07:41.524 02:55:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.524 02:55:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.524 02:55:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.524 02:55:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.524 02:55:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.524 02:55:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.524 02:55:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.524 02:55:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.524 02:55:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.524 02:55:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.524 02:55:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:41.524 02:55:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:41.524 02:55:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:41.524 02:55:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:41.524 02:55:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:41.524 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:41.524 02:55:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:07:41.524 02:55:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:41.524 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:41.524 02:55:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:41.524 02:55:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:41.524 02:55:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.524 02:55:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:41.524 02:55:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.524 02:55:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:41.524 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:41.524 02:55:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.524 02:55:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:41.524 02:55:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.524 02:55:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:41.524 02:55:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.524 02:55:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:41.524 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:41.524 02:55:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.524 02:55:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:41.524 02:55:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:41.524 02:55:36 -- 
nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:41.524 02:55:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.524 02:55:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.524 02:55:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.524 02:55:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:41.524 02:55:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.524 02:55:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.524 02:55:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:41.524 02:55:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.524 02:55:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.524 02:55:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:41.524 02:55:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:41.524 02:55:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.524 02:55:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.524 02:55:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.524 02:55:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.524 02:55:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:41.524 02:55:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.524 02:55:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.524 02:55:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.524 02:55:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:41.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:41.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:07:41.524 00:07:41.524 --- 10.0.0.2 ping statistics --- 00:07:41.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.524 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:07:41.524 02:55:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:41.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:07:41.524 00:07:41.524 --- 10.0.0.1 ping statistics --- 00:07:41.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.524 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:07:41.524 02:55:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.524 02:55:36 -- nvmf/common.sh@410 -- # return 0 00:07:41.524 02:55:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:41.524 02:55:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.524 02:55:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:41.524 02:55:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.524 02:55:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:41.524 02:55:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:41.524 02:55:36 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:41.524 02:55:36 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:41.524 02:55:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:41.524 02:55:36 -- common/autotest_common.sh@10 -- # set +x 00:07:41.524 02:55:36 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:41.524 02:55:36 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:41.524 02:55:36 -- target/nvmf_example.sh@34 -- # nvmfpid=1899083 00:07:41.524 02:55:36 -- target/nvmf_example.sh@33 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:41.524 02:55:36 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:41.524 02:55:36 -- target/nvmf_example.sh@36 -- # waitforlisten 1899083 00:07:41.524 02:55:36 -- common/autotest_common.sh@819 -- # '[' -z 1899083 ']' 00:07:41.524 02:55:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.524 02:55:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:41.524 02:55:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.524 02:55:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:41.524 02:55:36 -- common/autotest_common.sh@10 -- # set +x 00:07:41.524 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.089 02:55:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:42.089 02:55:37 -- common/autotest_common.sh@852 -- # return 0 00:07:42.089 02:55:37 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:42.089 02:55:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:42.089 02:55:37 -- common/autotest_common.sh@10 -- # set +x 00:07:42.347 02:55:37 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:42.347 02:55:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:42.347 02:55:37 -- common/autotest_common.sh@10 -- # set +x 00:07:42.347 02:55:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:42.347 02:55:37 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:42.347 02:55:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:42.347 02:55:37 -- common/autotest_common.sh@10 -- # set +x 00:07:42.347 02:55:37 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:42.347 02:55:37 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:42.347 02:55:37 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:42.347 02:55:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:42.347 02:55:37 -- common/autotest_common.sh@10 -- # set +x 00:07:42.347 02:55:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:42.347 02:55:37 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:42.347 02:55:37 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:42.347 02:55:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:42.347 02:55:37 -- common/autotest_common.sh@10 -- # set +x 00:07:42.347 02:55:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:42.347 02:55:37 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:42.347 02:55:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:42.347 02:55:37 -- common/autotest_common.sh@10 -- # set +x 00:07:42.347 02:55:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:42.347 02:55:37 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:42.347 02:55:37 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:42.347 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.545 Initializing NVMe Controllers 00:07:54.545 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:54.545 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:54.545 Initialization complete. 
Launching workers. 00:07:54.545 ======================================================== 00:07:54.545 Latency(us) 00:07:54.545 Device Information : IOPS MiB/s Average min max 00:07:54.545 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15536.20 60.69 4120.80 861.32 16354.21 00:07:54.545 ======================================================== 00:07:54.545 Total : 15536.20 60.69 4120.80 861.32 16354.21 00:07:54.545 00:07:54.545 02:55:47 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:54.545 02:55:47 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:54.545 02:55:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:54.545 02:55:47 -- nvmf/common.sh@116 -- # sync 00:07:54.545 02:55:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:54.545 02:55:47 -- nvmf/common.sh@119 -- # set +e 00:07:54.545 02:55:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:54.545 02:55:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:54.545 rmmod nvme_tcp 00:07:54.545 rmmod nvme_fabrics 00:07:54.545 rmmod nvme_keyring 00:07:54.545 02:55:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:54.545 02:55:47 -- nvmf/common.sh@123 -- # set -e 00:07:54.545 02:55:47 -- nvmf/common.sh@124 -- # return 0 00:07:54.545 02:55:47 -- nvmf/common.sh@477 -- # '[' -n 1899083 ']' 00:07:54.545 02:55:47 -- nvmf/common.sh@478 -- # killprocess 1899083 00:07:54.545 02:55:47 -- common/autotest_common.sh@926 -- # '[' -z 1899083 ']' 00:07:54.545 02:55:47 -- common/autotest_common.sh@930 -- # kill -0 1899083 00:07:54.545 02:55:47 -- common/autotest_common.sh@931 -- # uname 00:07:54.545 02:55:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:54.545 02:55:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1899083 00:07:54.545 02:55:47 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:07:54.545 02:55:47 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:07:54.545 02:55:47 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 1899083' 00:07:54.545 killing process with pid 1899083 00:07:54.545 02:55:47 -- common/autotest_common.sh@945 -- # kill 1899083 00:07:54.545 02:55:47 -- common/autotest_common.sh@950 -- # wait 1899083 00:07:54.545 nvmf threads initialize successfully 00:07:54.545 bdev subsystem init successfully 00:07:54.545 created a nvmf target service 00:07:54.545 create targets's poll groups done 00:07:54.545 all subsystems of target started 00:07:54.545 nvmf target is running 00:07:54.545 all subsystems of target stopped 00:07:54.545 destroy targets's poll groups done 00:07:54.545 destroyed the nvmf target service 00:07:54.545 bdev subsystem finish successfully 00:07:54.545 nvmf threads destroy successfully 00:07:54.545 02:55:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:54.545 02:55:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:54.545 02:55:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:54.545 02:55:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:54.545 02:55:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:54.545 02:55:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.545 02:55:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:54.545 02:55:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.804 02:55:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:54.804 02:55:49 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:54.804 02:55:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:54.804 02:55:49 -- common/autotest_common.sh@10 -- # set +x 00:07:54.804 00:07:54.804 real 0m15.877s 00:07:54.804 user 0m45.044s 00:07:54.804 sys 0m3.188s 00:07:54.804 02:55:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.804 02:55:50 -- common/autotest_common.sh@10 -- # set +x 00:07:54.804 ************************************ 00:07:54.804 END TEST 
nvmf_example 00:07:54.804 ************************************ 00:07:54.804 02:55:50 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:54.804 02:55:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:54.804 02:55:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.804 02:55:50 -- common/autotest_common.sh@10 -- # set +x 00:07:54.804 ************************************ 00:07:54.804 START TEST nvmf_filesystem 00:07:54.804 ************************************ 00:07:54.804 02:55:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:55.091 * Looking for test storage... 00:07:55.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.091 02:55:50 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:55.091 02:55:50 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:55.091 02:55:50 -- common/autotest_common.sh@34 -- # set -e 00:07:55.091 02:55:50 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:55.091 02:55:50 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:55.091 02:55:50 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:55.091 02:55:50 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:55.091 02:55:50 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:55.091 02:55:50 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:55.091 02:55:50 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:55.091 02:55:50 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:55.091 02:55:50 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:55.091 
02:55:50 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:55.091 02:55:50 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:55.091 02:55:50 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:55.091 02:55:50 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:55.091 02:55:50 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:55.091 02:55:50 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:55.091 02:55:50 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:55.091 02:55:50 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:55.091 02:55:50 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:55.091 02:55:50 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:55.091 02:55:50 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:55.091 02:55:50 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:55.091 02:55:50 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:55.091 02:55:50 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:55.091 02:55:50 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:55.091 02:55:50 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:55.091 02:55:50 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:55.091 02:55:50 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:55.091 02:55:50 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:55.091 02:55:50 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:55.091 02:55:50 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:55.091 02:55:50 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:55.091 02:55:50 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:55.091 02:55:50 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:55.091 02:55:50 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:55.091 02:55:50 -- common/build_config.sh@31 -- # 
CONFIG_OCF=n 00:07:55.091 02:55:50 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:55.091 02:55:50 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:55.091 02:55:50 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:55.091 02:55:50 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:55.091 02:55:50 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:55.091 02:55:50 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:55.091 02:55:50 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:55.091 02:55:50 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:55.091 02:55:50 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:55.091 02:55:50 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:55.091 02:55:50 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:55.091 02:55:50 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:55.091 02:55:50 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:55.091 02:55:50 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:55.091 02:55:50 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:55.091 02:55:50 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:55.091 02:55:50 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:55.091 02:55:50 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:55.091 02:55:50 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:55.091 02:55:50 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:07:55.091 02:55:50 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:55.091 02:55:50 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:07:55.091 02:55:50 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:55.091 02:55:50 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:55.091 02:55:50 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 
00:07:55.091 02:55:50 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:55.091 02:55:50 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:55.091 02:55:50 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:55.091 02:55:50 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:55.091 02:55:50 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:55.091 02:55:50 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:55.091 02:55:50 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:55.091 02:55:50 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:55.091 02:55:50 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:55.091 02:55:50 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:55.091 02:55:50 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:55.091 02:55:50 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:55.091 02:55:50 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:55.091 02:55:50 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:55.091 02:55:50 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:55.091 02:55:50 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:55.091 02:55:50 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:55.091 02:55:50 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:55.091 02:55:50 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:55.091 02:55:50 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:55.091 02:55:50 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:55.091 02:55:50 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:55.091 02:55:50 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:07:55.091 02:55:50 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:55.091 02:55:50 -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:55.091 02:55:50 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:55.091 02:55:50 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:55.091 02:55:50 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:55.091 02:55:50 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:55.091 02:55:50 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:55.092 02:55:50 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:55.092 02:55:50 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:55.092 02:55:50 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:55.092 02:55:50 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:55.092 02:55:50 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:55.092 02:55:50 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:55.092 02:55:50 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:55.092 02:55:50 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:55.092 02:55:50 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:55.092 #define SPDK_CONFIG_H 00:07:55.092 #define SPDK_CONFIG_APPS 1 00:07:55.092 #define SPDK_CONFIG_ARCH native 00:07:55.092 #undef SPDK_CONFIG_ASAN 00:07:55.092 #undef SPDK_CONFIG_AVAHI 00:07:55.092 #undef SPDK_CONFIG_CET 00:07:55.092 #define SPDK_CONFIG_COVERAGE 1 00:07:55.092 #define SPDK_CONFIG_CROSS_PREFIX 00:07:55.092 #undef SPDK_CONFIG_CRYPTO 00:07:55.092 #undef 
SPDK_CONFIG_CRYPTO_MLX5 00:07:55.092 #undef SPDK_CONFIG_CUSTOMOCF 00:07:55.092 #undef SPDK_CONFIG_DAOS 00:07:55.092 #define SPDK_CONFIG_DAOS_DIR 00:07:55.092 #define SPDK_CONFIG_DEBUG 1 00:07:55.092 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:55.092 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:55.092 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:55.092 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:55.092 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:55.092 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:55.092 #define SPDK_CONFIG_EXAMPLES 1 00:07:55.092 #undef SPDK_CONFIG_FC 00:07:55.092 #define SPDK_CONFIG_FC_PATH 00:07:55.092 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:55.092 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:55.092 #undef SPDK_CONFIG_FUSE 00:07:55.092 #undef SPDK_CONFIG_FUZZER 00:07:55.092 #define SPDK_CONFIG_FUZZER_LIB 00:07:55.092 #undef SPDK_CONFIG_GOLANG 00:07:55.092 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:55.092 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:55.092 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:55.092 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:55.092 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:55.092 #define SPDK_CONFIG_IDXD 1 00:07:55.092 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:55.092 #undef SPDK_CONFIG_IPSEC_MB 00:07:55.092 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:55.092 #define SPDK_CONFIG_ISAL 1 00:07:55.092 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:55.092 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:55.092 #define SPDK_CONFIG_LIBDIR 00:07:55.092 #undef SPDK_CONFIG_LTO 00:07:55.092 #define SPDK_CONFIG_MAX_LCORES 00:07:55.092 #define SPDK_CONFIG_NVME_CUSE 1 00:07:55.092 #undef SPDK_CONFIG_OCF 00:07:55.092 #define SPDK_CONFIG_OCF_PATH 00:07:55.092 #define SPDK_CONFIG_OPENSSL_PATH 00:07:55.092 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:55.092 
#undef SPDK_CONFIG_PGO_USE 00:07:55.092 #define SPDK_CONFIG_PREFIX /usr/local 00:07:55.092 #undef SPDK_CONFIG_RAID5F 00:07:55.092 #undef SPDK_CONFIG_RBD 00:07:55.092 #define SPDK_CONFIG_RDMA 1 00:07:55.092 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:55.092 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:55.092 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:55.092 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:55.092 #define SPDK_CONFIG_SHARED 1 00:07:55.092 #undef SPDK_CONFIG_SMA 00:07:55.092 #define SPDK_CONFIG_TESTS 1 00:07:55.092 #undef SPDK_CONFIG_TSAN 00:07:55.092 #define SPDK_CONFIG_UBLK 1 00:07:55.092 #define SPDK_CONFIG_UBSAN 1 00:07:55.092 #undef SPDK_CONFIG_UNIT_TESTS 00:07:55.092 #undef SPDK_CONFIG_URING 00:07:55.092 #define SPDK_CONFIG_URING_PATH 00:07:55.092 #undef SPDK_CONFIG_URING_ZNS 00:07:55.092 #undef SPDK_CONFIG_USDT 00:07:55.092 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:55.092 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:55.092 #define SPDK_CONFIG_VFIO_USER 1 00:07:55.092 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:55.092 #define SPDK_CONFIG_VHOST 1 00:07:55.092 #define SPDK_CONFIG_VIRTIO 1 00:07:55.092 #undef SPDK_CONFIG_VTUNE 00:07:55.092 #define SPDK_CONFIG_VTUNE_DIR 00:07:55.092 #define SPDK_CONFIG_WERROR 1 00:07:55.092 #define SPDK_CONFIG_WPDK_DIR 00:07:55.092 #undef SPDK_CONFIG_XNVME 00:07:55.092 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:55.092 02:55:50 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:55.092 02:55:50 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.092 02:55:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.092 02:55:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.092 02:55:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.092 02:55:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.092 02:55:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.092 02:55:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.092 02:55:50 -- paths/export.sh@5 -- # export PATH 00:07:55.092 02:55:50 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.092 02:55:50 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:55.092 02:55:50 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:55.092 02:55:50 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:55.092 02:55:50 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:55.092 02:55:50 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:55.092 02:55:50 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:55.092 02:55:50 -- pm/common@16 -- # TEST_TAG=N/A 00:07:55.092 02:55:50 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:55.092 02:55:50 -- common/autotest_common.sh@52 -- # : 1 00:07:55.092 02:55:50 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:07:55.092 02:55:50 -- common/autotest_common.sh@56 -- # : 0 00:07:55.092 02:55:50 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:55.092 02:55:50 -- common/autotest_common.sh@58 -- # : 0 00:07:55.092 02:55:50 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:07:55.092 02:55:50 -- common/autotest_common.sh@60 -- # : 1 00:07:55.092 02:55:50 -- 
common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:55.092 02:55:50 -- common/autotest_common.sh@62 -- # : 0 00:07:55.092 02:55:50 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:07:55.092 02:55:50 -- common/autotest_common.sh@64 -- # : 00:07:55.092 02:55:50 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:07:55.092 02:55:50 -- common/autotest_common.sh@66 -- # : 0 00:07:55.092 02:55:50 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:07:55.092 02:55:50 -- common/autotest_common.sh@68 -- # : 0 00:07:55.092 02:55:50 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:07:55.092 02:55:50 -- common/autotest_common.sh@70 -- # : 0 00:07:55.092 02:55:50 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:07:55.093 02:55:50 -- common/autotest_common.sh@72 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:55.093 02:55:50 -- common/autotest_common.sh@74 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:07:55.093 02:55:50 -- common/autotest_common.sh@76 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:07:55.093 02:55:50 -- common/autotest_common.sh@78 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:07:55.093 02:55:50 -- common/autotest_common.sh@80 -- # : 1 00:07:55.093 02:55:50 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:07:55.093 02:55:50 -- common/autotest_common.sh@82 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:07:55.093 02:55:50 -- common/autotest_common.sh@84 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:07:55.093 02:55:50 -- common/autotest_common.sh@86 -- # : 1 00:07:55.093 02:55:50 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:07:55.093 
02:55:50 -- common/autotest_common.sh@88 -- # : 1 00:07:55.093 02:55:50 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:07:55.093 02:55:50 -- common/autotest_common.sh@90 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:55.093 02:55:50 -- common/autotest_common.sh@92 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:07:55.093 02:55:50 -- common/autotest_common.sh@94 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:07:55.093 02:55:50 -- common/autotest_common.sh@96 -- # : tcp 00:07:55.093 02:55:50 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:55.093 02:55:50 -- common/autotest_common.sh@98 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:07:55.093 02:55:50 -- common/autotest_common.sh@100 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:07:55.093 02:55:50 -- common/autotest_common.sh@102 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:07:55.093 02:55:50 -- common/autotest_common.sh@104 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:07:55.093 02:55:50 -- common/autotest_common.sh@106 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:07:55.093 02:55:50 -- common/autotest_common.sh@108 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:07:55.093 02:55:50 -- common/autotest_common.sh@110 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:07:55.093 02:55:50 -- common/autotest_common.sh@112 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:55.093 02:55:50 -- common/autotest_common.sh@114 -- # : 0 
00:07:55.093 02:55:50 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:07:55.093 02:55:50 -- common/autotest_common.sh@116 -- # : 1 00:07:55.093 02:55:50 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:07:55.093 02:55:50 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:55.093 02:55:50 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:55.093 02:55:50 -- common/autotest_common.sh@120 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:07:55.093 02:55:50 -- common/autotest_common.sh@122 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:07:55.093 02:55:50 -- common/autotest_common.sh@124 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:07:55.093 02:55:50 -- common/autotest_common.sh@126 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:07:55.093 02:55:50 -- common/autotest_common.sh@128 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:07:55.093 02:55:50 -- common/autotest_common.sh@130 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:07:55.093 02:55:50 -- common/autotest_common.sh@132 -- # : v22.11.4 00:07:55.093 02:55:50 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:07:55.093 02:55:50 -- common/autotest_common.sh@134 -- # : true 00:07:55.093 02:55:50 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:07:55.093 02:55:50 -- common/autotest_common.sh@136 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:07:55.093 02:55:50 -- common/autotest_common.sh@138 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:07:55.093 02:55:50 -- common/autotest_common.sh@140 -- # : 0 00:07:55.093 
02:55:50 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:07:55.093 02:55:50 -- common/autotest_common.sh@142 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:07:55.093 02:55:50 -- common/autotest_common.sh@144 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:07:55.093 02:55:50 -- common/autotest_common.sh@146 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:07:55.093 02:55:50 -- common/autotest_common.sh@148 -- # : e810 00:07:55.093 02:55:50 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:07:55.093 02:55:50 -- common/autotest_common.sh@150 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:07:55.093 02:55:50 -- common/autotest_common.sh@152 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:07:55.093 02:55:50 -- common/autotest_common.sh@154 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:07:55.093 02:55:50 -- common/autotest_common.sh@156 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:07:55.093 02:55:50 -- common/autotest_common.sh@158 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:07:55.093 02:55:50 -- common/autotest_common.sh@160 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:07:55.093 02:55:50 -- common/autotest_common.sh@163 -- # : 00:07:55.093 02:55:50 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:07:55.093 02:55:50 -- common/autotest_common.sh@165 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:07:55.093 02:55:50 -- common/autotest_common.sh@167 -- # : 0 00:07:55.093 02:55:50 -- common/autotest_common.sh@168 -- # 
export SPDK_JSONRPC_GO_CLIENT 00:07:55.093 02:55:50 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:55.093 02:55:50 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:55.093 02:55:50 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:55.093 02:55:50 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:55.093 02:55:50 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:55.093 02:55:50 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:55.093 02:55:50 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 
00:07:55.094 02:55:50 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:55.094 02:55:50 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:55.094 02:55:50 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:55.094 02:55:50 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:55.094 02:55:50 -- common/autotest_common.sh@181 -- # 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:55.094 02:55:50 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:55.094 02:55:50 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:07:55.094 02:55:50 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:55.094 02:55:50 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:55.094 02:55:50 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:55.094 02:55:50 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:55.094 02:55:50 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:55.094 02:55:50 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:07:55.094 02:55:50 -- common/autotest_common.sh@196 -- # cat 00:07:55.094 02:55:50 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:07:55.094 02:55:50 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:55.094 02:55:50 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:55.094 02:55:50 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:55.094 02:55:50 -- 
common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:55.094 02:55:50 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:07:55.094 02:55:50 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:07:55.094 02:55:50 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:55.094 02:55:50 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:55.094 02:55:50 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:55.094 02:55:50 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:55.094 02:55:50 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:55.094 02:55:50 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:55.094 02:55:50 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:55.094 02:55:50 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:55.094 02:55:50 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:55.094 02:55:50 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:55.094 02:55:50 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:55.094 02:55:50 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:55.094 02:55:50 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:07:55.094 02:55:50 -- common/autotest_common.sh@249 -- # export valgrind= 00:07:55.094 02:55:50 -- 
common/autotest_common.sh@249 -- # valgrind= 00:07:55.094 02:55:50 -- common/autotest_common.sh@255 -- # uname -s 00:07:55.094 02:55:50 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:07:55.094 02:55:50 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:07:55.094 02:55:50 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:07:55.094 02:55:50 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:07:55.094 02:55:50 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:55.094 02:55:50 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:07:55.094 02:55:50 -- common/autotest_common.sh@265 -- # MAKE=make 00:07:55.094 02:55:50 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j48 00:07:55.094 02:55:50 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:07:55.094 02:55:50 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:07:55.094 02:55:50 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:55.094 02:55:50 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:07:55.094 02:55:50 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:07:55.094 02:55:50 -- common/autotest_common.sh@291 -- # for i in "$@" 00:07:55.094 02:55:50 -- common/autotest_common.sh@292 -- # case "$i" in 00:07:55.094 02:55:50 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:07:55.094 02:55:50 -- common/autotest_common.sh@309 -- # [[ -z 1900834 ]] 00:07:55.094 02:55:50 -- common/autotest_common.sh@309 -- # kill -0 1900834 00:07:55.094 02:55:50 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:07:55.094 02:55:50 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:07:55.094 02:55:50 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:07:55.094 02:55:50 -- common/autotest_common.sh@322 -- # local mount target_dir 00:07:55.094 02:55:50 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:07:55.094 02:55:50 -- 
common/autotest_common.sh@325 -- # local source fs size avail mount use 00:07:55.094 02:55:50 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:07:55.094 02:55:50 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:07:55.094 02:55:50 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.wKNWRq 00:07:55.094 02:55:50 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:55.094 02:55:50 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:07:55.094 02:55:50 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:07:55.094 02:55:50 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.wKNWRq/tests/target /tmp/spdk.wKNWRq 00:07:55.094 02:55:50 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:07:55.094 02:55:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:55.094 02:55:50 -- common/autotest_common.sh@318 -- # df -T 00:07:55.094 02:55:50 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:07:55.094 02:55:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:07:55.094 02:55:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:07:55.094 02:55:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:07:55.094 02:55:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:07:55.094 02:55:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:07:55.094 02:55:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:55.094 02:55:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:07:55.094 02:55:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:07:55.094 02:55:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=953643008 00:07:55.094 02:55:50 -- common/autotest_common.sh@353 -- 
# sizes["$mount"]=5284429824 00:07:55.094 02:55:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=4330786816 00:07:55.094 02:55:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:55.094 02:55:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:07:55.094 02:55:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:07:55.094 02:55:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=53523415040 00:07:55.095 02:55:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=61994708992 00:07:55.095 02:55:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=8471293952 00:07:55.095 02:55:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:55.095 02:55:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:55.095 02:55:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:55.095 02:55:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=30943834112 00:07:55.095 02:55:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997352448 00:07:55.095 02:55:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:07:55.095 02:55:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:55.095 02:55:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:55.095 02:55:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:55.095 02:55:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=12390182912 00:07:55.095 02:55:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12398944256 00:07:55.095 02:55:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=8761344 00:07:55.095 02:55:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:55.095 02:55:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:55.095 02:55:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:55.095 02:55:50 -- 
common/autotest_common.sh@353 -- # avails["$mount"]=30996176896 00:07:55.095 02:55:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997356544 00:07:55.095 02:55:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=1179648 00:07:55.095 02:55:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:55.095 02:55:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:07:55.095 02:55:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:07:55.095 02:55:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=6199463936 00:07:55.095 02:55:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6199468032 00:07:55.095 02:55:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:07:55.095 02:55:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:07:55.095 02:55:50 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:07:55.095 * Looking for test storage... 00:07:55.095 02:55:50 -- common/autotest_common.sh@359 -- # local target_space new_size 00:07:55.095 02:55:50 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:07:55.095 02:55:50 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.095 02:55:50 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:55.095 02:55:50 -- common/autotest_common.sh@363 -- # mount=/ 00:07:55.095 02:55:50 -- common/autotest_common.sh@365 -- # target_space=53523415040 00:07:55.095 02:55:50 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:07:55.095 02:55:50 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:07:55.095 02:55:50 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:07:55.095 02:55:50 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:07:55.095 02:55:50 -- 
common/autotest_common.sh@371 -- # [[ / == / ]] 00:07:55.095 02:55:50 -- common/autotest_common.sh@372 -- # new_size=10685886464 00:07:55.095 02:55:50 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:55.095 02:55:50 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.095 02:55:50 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.095 02:55:50 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.095 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.095 02:55:50 -- common/autotest_common.sh@380 -- # return 0 00:07:55.095 02:55:50 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:07:55.095 02:55:50 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:07:55.095 02:55:50 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:55.095 02:55:50 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:55.095 02:55:50 -- common/autotest_common.sh@1672 -- # true 00:07:55.095 02:55:50 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:07:55.095 02:55:50 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:55.095 02:55:50 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:55.095 02:55:50 -- common/autotest_common.sh@27 -- # exec 00:07:55.095 02:55:50 -- common/autotest_common.sh@29 -- # exec 00:07:55.095 02:55:50 -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:55.095 02:55:50 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
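The `set_test_storage` trace above runs `df -T`, walks the mount table into the `mounts`/`sizes`/`avails` arrays, and picks the first candidate directory whose filesystem has enough free space (here `/` on the overlay with ~53 GB available). A reduced sketch of that selection loop, reading pre-parsed `mountpoint availKB` pairs instead of live `df` output (`pick_storage` is a hypothetical name; the real helper is `set_test_storage` in `common/autotest_common.sh`):

```shell
# Sketch of the candidate-selection logic: df reports 1K blocks, the
# requested size is in bytes, so scale before comparing.
pick_storage() {
    local requested=$1 mount avail
    while read -r mount avail; do
        if (( avail * 1024 >= requested )); then
            echo "$mount"     # first mount with enough room wins
            return 0
        fi
    done
    return 1                  # nothing large enough
}
```

With the numbers from the log, `printf '/tmp 100\n/ 52268960\n' | pick_storage 2147483648` selects `/`, matching the `target_space=53523415040` comparison above.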
0 : 0 - 1]' 00:07:55.095 02:55:50 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:55.095 02:55:50 -- common/autotest_common.sh@18 -- # set -x 00:07:55.095 02:55:50 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.095 02:55:50 -- nvmf/common.sh@7 -- # uname -s 00:07:55.095 02:55:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.095 02:55:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.095 02:55:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.095 02:55:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.095 02:55:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.095 02:55:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.095 02:55:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.095 02:55:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.095 02:55:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.095 02:55:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.095 02:55:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:55.095 02:55:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:55.095 02:55:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.095 02:55:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.095 02:55:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.095 02:55:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.095 02:55:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.095 02:55:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.095 02:55:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.095 02:55:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.095 02:55:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.095 02:55:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.095 02:55:50 -- paths/export.sh@5 -- # export PATH 00:07:55.095 02:55:50 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.095 02:55:50 -- nvmf/common.sh@46 -- # : 0 00:07:55.095 02:55:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:55.095 02:55:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:55.095 02:55:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:55.095 02:55:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.095 02:55:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.095 02:55:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:55.095 02:55:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:55.095 02:55:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:55.095 02:55:50 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:55.095 02:55:50 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:55.095 02:55:50 -- target/filesystem.sh@15 -- # nvmftestinit 00:07:55.095 02:55:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:55.096 02:55:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.096 02:55:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:55.096 02:55:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:55.096 02:55:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:55.096 02:55:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.096 02:55:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.096 02:55:50 -- 
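The `paths/export.sh` lines above prepend the same toolchain directories (`go`, `protoc`, `golangci`) on every source, so `PATH` visibly accumulates duplicates in the trace. A small sketch of how such a `PATH` can be deduplicated while preserving first-seen order, in pure shell (`dedupe_path` is a hypothetical helper, not part of the SPDK scripts):

```shell
# Split on ':' and emit each directory only the first time it is seen.
dedupe_path() {
    local out= dir seen=:
    local IFS=:
    for dir in $1; do
        case $seen in
            *:"$dir":*) ;;                       # already emitted, skip
            *) out=${out:+$out:}$dir; seen=$seen$dir: ;;
        esac
    done
    printf '%s\n' "$out"
}
```

For example, `dedupe_path "/a:/b:/a:/c:/b"` yields `/a:/b:/c`.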
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.096 02:55:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:55.096 02:55:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:55.096 02:55:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:55.096 02:55:50 -- common/autotest_common.sh@10 -- # set +x 00:07:56.997 02:55:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:56.997 02:55:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:56.997 02:55:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:56.997 02:55:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:56.997 02:55:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:56.997 02:55:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:56.997 02:55:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:56.997 02:55:52 -- nvmf/common.sh@294 -- # net_devs=() 00:07:56.997 02:55:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:56.997 02:55:52 -- nvmf/common.sh@295 -- # e810=() 00:07:56.997 02:55:52 -- nvmf/common.sh@295 -- # local -ga e810 00:07:56.997 02:55:52 -- nvmf/common.sh@296 -- # x722=() 00:07:56.997 02:55:52 -- nvmf/common.sh@296 -- # local -ga x722 00:07:56.997 02:55:52 -- nvmf/common.sh@297 -- # mlx=() 00:07:56.997 02:55:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:56.997 02:55:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.997 02:55:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.997 02:55:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.997 02:55:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.997 02:55:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.997 02:55:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.997 02:55:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.997 02:55:52 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.997 02:55:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.997 02:55:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.997 02:55:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.997 02:55:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:56.997 02:55:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:56.997 02:55:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:56.997 02:55:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:56.997 02:55:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:56.997 02:55:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:56.997 02:55:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:56.997 02:55:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:56.997 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:56.997 02:55:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:56.997 02:55:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:56.997 02:55:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.997 02:55:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.997 02:55:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:56.997 02:55:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:56.997 02:55:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:56.997 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:56.997 02:55:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:56.997 02:55:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:56.997 02:55:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.997 02:55:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.997 02:55:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:56.997 02:55:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:56.997 02:55:52 -- 
nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:56.997 02:55:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:56.997 02:55:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:56.997 02:55:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.997 02:55:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:56.997 02:55:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.997 02:55:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:56.997 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:56.997 02:55:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.997 02:55:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:56.997 02:55:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.997 02:55:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:56.997 02:55:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.997 02:55:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:56.997 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:56.997 02:55:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.997 02:55:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:56.997 02:55:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:56.997 02:55:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:56.997 02:55:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:56.997 02:55:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:56.997 02:55:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.997 02:55:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.997 02:55:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.997 02:55:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:56.997 02:55:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.997 02:55:52 -- nvmf/common.sh@236 -- # 
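The device discovery above maps each detected PCI function (e.g. `0000:0a:00.0`, an Intel E810 `0x159b`) to its kernel netdev by globbing `/sys/bus/pci/devices/$pci/net/`* and stripping the path prefix, which is how `cvl_0_0`/`cvl_0_1` are found. A sketch of that mapping with the sysfs root made overridable so it can be exercised without the hardware (the second parameter is an addition for illustration; the harness globs `/sys` directly):

```shell
# List netdev names attached to a PCI function, mirroring the
# pci_net_devs glob in nvmf/common.sh.
pci_to_netdevs() {
    local pci=$1 base=${2:-/sys/bus/pci/devices} dev
    for dev in "$base/$pci/net/"*; do
        [ -e "$dev" ] || continue   # glob did not match anything
        echo "${dev##*/}"           # keep only the interface name
    done
}
```

On the test node, `pci_to_netdevs 0000:0a:00.0` would print `cvl_0_0`, matching the "Found net devices under 0000:0a:00.0" line above.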
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.997 02:55:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:56.997 02:55:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.997 02:55:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.997 02:55:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:56.997 02:55:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:56.997 02:55:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.997 02:55:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:57.256 02:55:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:57.256 02:55:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:57.256 02:55:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:57.256 02:55:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:57.256 02:55:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:57.256 02:55:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:57.256 02:55:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:57.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:57.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:07:57.256 00:07:57.256 --- 10.0.0.2 ping statistics --- 00:07:57.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.256 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:07:57.256 02:55:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:57.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:57.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:07:57.256 00:07:57.256 --- 10.0.0.1 ping statistics --- 00:07:57.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.256 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:07:57.256 02:55:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.256 02:55:52 -- nvmf/common.sh@410 -- # return 0 00:07:57.256 02:55:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:57.256 02:55:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.256 02:55:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:57.256 02:55:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:57.256 02:55:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.256 02:55:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:57.256 02:55:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:57.256 02:55:52 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:57.256 02:55:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:57.256 02:55:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.256 02:55:52 -- common/autotest_common.sh@10 -- # set +x 00:07:57.256 ************************************ 00:07:57.256 START TEST nvmf_filesystem_no_in_capsule 00:07:57.256 ************************************ 00:07:57.256 02:55:52 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:07:57.256 02:55:52 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:57.256 02:55:52 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:57.256 02:55:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:57.256 02:55:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:57.256 02:55:52 -- common/autotest_common.sh@10 -- # set +x 00:07:57.256 02:55:52 -- nvmf/common.sh@469 -- # nvmfpid=1902461 00:07:57.256 02:55:52 -- nvmf/common.sh@468 -- # ip netns exec 
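`nvmf_tcp_init` above builds the test topology by moving the target port into a network namespace, addressing both ends on `10.0.0.0/24`, and opening TCP 4420, then verifying with `ping` in both directions. A dry-run sketch that only prints the command sequence from the trace (nothing is executed, so no root is needed; interface and namespace names follow the log):

```shell
# Emit the namespace wiring the harness performs for a target/initiator
# interface pair. The namespace name pattern ${tgt}_ns_spdk matches
# NVMF_TARGET_NAMESPACE above.
print_netns_setup() {
    local tgt=$1 ini=$2 ns=${1}_ns_spdk
    cat <<EOF
ip netns add $ns
ip link set $tgt netns $ns
ip addr add 10.0.0.1/24 dev $ini
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt
ip link set $ini up
ip netns exec $ns ip link set $tgt up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini -p tcp --dport 4420 -j ACCEPT
EOF
}
```

Running `print_netns_setup cvl_0_0 cvl_0_1` reproduces the `ip netns`/`iptables` sequence traced above; the target app is then launched under `ip netns exec cvl_0_0_ns_spdk`.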
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:57.256 02:55:52 -- nvmf/common.sh@470 -- # waitforlisten 1902461 00:07:57.256 02:55:52 -- common/autotest_common.sh@819 -- # '[' -z 1902461 ']' 00:07:57.256 02:55:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.256 02:55:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:57.256 02:55:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.256 02:55:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:57.256 02:55:52 -- common/autotest_common.sh@10 -- # set +x 00:07:57.256 [2024-07-14 02:55:52.446586] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:57.256 [2024-07-14 02:55:52.446667] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.256 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.514 [2024-07-14 02:55:52.520934] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.514 [2024-07-14 02:55:52.617750] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:57.514 [2024-07-14 02:55:52.617914] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.514 [2024-07-14 02:55:52.617940] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.514 [2024-07-14 02:55:52.617954] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:57.514 [2024-07-14 02:55:52.618011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.514 [2024-07-14 02:55:52.618078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.514 [2024-07-14 02:55:52.618131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.514 [2024-07-14 02:55:52.618134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.448 02:55:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:58.448 02:55:53 -- common/autotest_common.sh@852 -- # return 0 00:07:58.448 02:55:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:58.448 02:55:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:58.448 02:55:53 -- common/autotest_common.sh@10 -- # set +x 00:07:58.448 02:55:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:58.448 02:55:53 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:58.448 02:55:53 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:58.448 02:55:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:58.448 02:55:53 -- common/autotest_common.sh@10 -- # set +x 00:07:58.448 [2024-07-14 02:55:53.380315] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:58.448 02:55:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:58.448 02:55:53 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:58.448 02:55:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:58.448 02:55:53 -- common/autotest_common.sh@10 -- # set +x 00:07:58.448 Malloc1 00:07:58.448 02:55:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:58.448 02:55:53 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:58.448 02:55:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:58.448 02:55:53 -- 
common/autotest_common.sh@10 -- # set +x 00:07:58.448 02:55:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:58.448 02:55:53 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:58.448 02:55:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:58.448 02:55:53 -- common/autotest_common.sh@10 -- # set +x 00:07:58.448 02:55:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:58.448 02:55:53 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:58.448 02:55:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:58.448 02:55:53 -- common/autotest_common.sh@10 -- # set +x 00:07:58.448 [2024-07-14 02:55:53.568107] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:58.448 02:55:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:58.448 02:55:53 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:58.448 02:55:53 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:58.448 02:55:53 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:58.448 02:55:53 -- common/autotest_common.sh@1359 -- # local bs 00:07:58.448 02:55:53 -- common/autotest_common.sh@1360 -- # local nb 00:07:58.448 02:55:53 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:58.448 02:55:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:58.448 02:55:53 -- common/autotest_common.sh@10 -- # set +x 00:07:58.448 02:55:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:58.448 02:55:53 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:58.448 { 00:07:58.448 "name": "Malloc1", 00:07:58.448 "aliases": [ 00:07:58.448 "88ab21a9-8115-4eea-964a-7d2778d8cfeb" 00:07:58.448 ], 00:07:58.448 "product_name": "Malloc disk", 00:07:58.448 "block_size": 512, 00:07:58.448 "num_blocks": 1048576, 00:07:58.448 "uuid": 
"88ab21a9-8115-4eea-964a-7d2778d8cfeb", 00:07:58.448 "assigned_rate_limits": { 00:07:58.448 "rw_ios_per_sec": 0, 00:07:58.448 "rw_mbytes_per_sec": 0, 00:07:58.448 "r_mbytes_per_sec": 0, 00:07:58.448 "w_mbytes_per_sec": 0 00:07:58.448 }, 00:07:58.448 "claimed": true, 00:07:58.448 "claim_type": "exclusive_write", 00:07:58.448 "zoned": false, 00:07:58.448 "supported_io_types": { 00:07:58.448 "read": true, 00:07:58.448 "write": true, 00:07:58.448 "unmap": true, 00:07:58.448 "write_zeroes": true, 00:07:58.448 "flush": true, 00:07:58.448 "reset": true, 00:07:58.448 "compare": false, 00:07:58.448 "compare_and_write": false, 00:07:58.448 "abort": true, 00:07:58.448 "nvme_admin": false, 00:07:58.448 "nvme_io": false 00:07:58.448 }, 00:07:58.448 "memory_domains": [ 00:07:58.448 { 00:07:58.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.448 "dma_device_type": 2 00:07:58.448 } 00:07:58.448 ], 00:07:58.448 "driver_specific": {} 00:07:58.448 } 00:07:58.448 ]' 00:07:58.448 02:55:53 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:58.448 02:55:53 -- common/autotest_common.sh@1362 -- # bs=512 00:07:58.448 02:55:53 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:58.448 02:55:53 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:58.448 02:55:53 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:58.448 02:55:53 -- common/autotest_common.sh@1367 -- # echo 512 00:07:58.448 02:55:53 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:58.448 02:55:53 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:59.381 02:55:54 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:59.381 02:55:54 -- common/autotest_common.sh@1177 -- # local i=0 00:07:59.381 02:55:54 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 
nvme_devices=0 00:07:59.381 02:55:54 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:59.381 02:55:54 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:01.277 02:55:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:01.277 02:55:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:01.277 02:55:56 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:01.277 02:55:56 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:01.277 02:55:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:01.277 02:55:56 -- common/autotest_common.sh@1187 -- # return 0 00:08:01.277 02:55:56 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:01.277 02:55:56 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:01.277 02:55:56 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:01.277 02:55:56 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:01.277 02:55:56 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:01.277 02:55:56 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:01.277 02:55:56 -- setup/common.sh@80 -- # echo 536870912 00:08:01.277 02:55:56 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:01.277 02:55:56 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:01.277 02:55:56 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:01.277 02:55:56 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:01.534 02:55:56 -- target/filesystem.sh@69 -- # partprobe 00:08:02.099 02:55:57 -- target/filesystem.sh@70 -- # sleep 1 00:08:03.472 02:55:58 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:03.472 02:55:58 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:03.472 02:55:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:03.472 02:55:58 -- common/autotest_common.sh@1083 -- # 
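The `waitforserial` trace above is a bounded retry loop: sleep, then re-run `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` until the device count matches, giving up after a fixed number of tries. The same skeleton generalized over an arbitrary probe command (a sketch; the bounds 15 tries / 2 s match the `(( i++ <= 15 ))` loop in the log, and `wait_for` is a hypothetical name):

```shell
# Retry a probe command until it succeeds or the try budget runs out.
# $1 = command string, $2 = max tries (default 15), $3 = delay seconds.
wait_for() {
    local i=0 tries=${2:-15}
    while (( i++ < tries )); do
        if eval "$1"; then return 0; fi
        sleep "${3:-2}"
    done
    return 1
}
```

E.g. `wait_for 'lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME' 15 2` mirrors the wait for the connected namespace to appear as `nvme0n1`.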
xtrace_disable 00:08:03.472 02:55:58 -- common/autotest_common.sh@10 -- # set +x 00:08:03.472 ************************************ 00:08:03.472 START TEST filesystem_ext4 00:08:03.472 ************************************ 00:08:03.472 02:55:58 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:03.472 02:55:58 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:03.472 02:55:58 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:03.472 02:55:58 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:03.472 02:55:58 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:03.472 02:55:58 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:03.472 02:55:58 -- common/autotest_common.sh@904 -- # local i=0 00:08:03.472 02:55:58 -- common/autotest_common.sh@905 -- # local force 00:08:03.472 02:55:58 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:03.472 02:55:58 -- common/autotest_common.sh@908 -- # force=-F 00:08:03.472 02:55:58 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:03.472 mke2fs 1.46.5 (30-Dec-2021) 00:08:03.472 Discarding device blocks: 0/522240 done 00:08:03.472 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:03.472 Filesystem UUID: b50a5e54-d780-4245-99cf-de946e10fe65 00:08:03.472 Superblock backups stored on blocks: 00:08:03.472 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:03.472 00:08:03.472 Allocating group tables: 0/64 done 00:08:03.472 Writing inode tables: 0/64 done 00:08:03.472 Creating journal (8192 blocks): done 00:08:04.553 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:08:04.553 00:08:04.553 02:55:59 -- common/autotest_common.sh@921 -- # return 0 00:08:04.553 02:55:59 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:04.811 02:56:00 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:05.069 02:56:00 -- target/filesystem.sh@25 -- # sync 
00:08:05.069 02:56:00 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:05.069 02:56:00 -- target/filesystem.sh@27 -- # sync 00:08:05.069 02:56:00 -- target/filesystem.sh@29 -- # i=0 00:08:05.069 02:56:00 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:05.069 02:56:00 -- target/filesystem.sh@37 -- # kill -0 1902461 00:08:05.069 02:56:00 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:05.069 02:56:00 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:05.069 02:56:00 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:05.069 02:56:00 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:05.069 00:08:05.069 real 0m1.815s 00:08:05.069 user 0m0.018s 00:08:05.069 sys 0m0.058s 00:08:05.069 02:56:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.069 02:56:00 -- common/autotest_common.sh@10 -- # set +x 00:08:05.069 ************************************ 00:08:05.069 END TEST filesystem_ext4 00:08:05.069 ************************************ 00:08:05.069 02:56:00 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:05.069 02:56:00 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:05.069 02:56:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:05.069 02:56:00 -- common/autotest_common.sh@10 -- # set +x 00:08:05.069 ************************************ 00:08:05.069 START TEST filesystem_btrfs 00:08:05.069 ************************************ 00:08:05.069 02:56:00 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:05.069 02:56:00 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:05.069 02:56:00 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:05.069 02:56:00 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:05.069 02:56:00 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:05.069 02:56:00 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:05.069 02:56:00 -- 
common/autotest_common.sh@904 -- # local i=0 00:08:05.069 02:56:00 -- common/autotest_common.sh@905 -- # local force 00:08:05.069 02:56:00 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:05.069 02:56:00 -- common/autotest_common.sh@910 -- # force=-f 00:08:05.069 02:56:00 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:05.327 btrfs-progs v6.6.2 00:08:05.327 See https://btrfs.readthedocs.io for more information. 00:08:05.327 00:08:05.327 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:05.327 NOTE: several default settings have changed in version 5.15, please make sure 00:08:05.327 this does not affect your deployments: 00:08:05.327 - DUP for metadata (-m dup) 00:08:05.327 - enabled no-holes (-O no-holes) 00:08:05.327 - enabled free-space-tree (-R free-space-tree) 00:08:05.327 00:08:05.327 Label: (null) 00:08:05.327 UUID: e69fce3d-a073-48d6-8ba6-861e5b46c38c 00:08:05.327 Node size: 16384 00:08:05.327 Sector size: 4096 00:08:05.327 Filesystem size: 510.00MiB 00:08:05.327 Block group profiles: 00:08:05.327 Data: single 8.00MiB 00:08:05.327 Metadata: DUP 32.00MiB 00:08:05.327 System: DUP 8.00MiB 00:08:05.327 SSD detected: yes 00:08:05.327 Zoned device: no 00:08:05.327 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:05.327 Runtime features: free-space-tree 00:08:05.327 Checksum: crc32c 00:08:05.327 Number of devices: 1 00:08:05.327 Devices: 00:08:05.327 ID SIZE PATH 00:08:05.327 1 510.00MiB /dev/nvme0n1p1 00:08:05.327 00:08:05.327 02:56:00 -- common/autotest_common.sh@921 -- # return 0 00:08:05.327 02:56:00 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:06.261 02:56:01 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:06.261 02:56:01 -- target/filesystem.sh@25 -- # sync 00:08:06.261 02:56:01 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:06.261 02:56:01 -- target/filesystem.sh@27 -- # sync 00:08:06.261 02:56:01 -- target/filesystem.sh@29 -- # 
i=0 00:08:06.261 02:56:01 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:06.261 02:56:01 -- target/filesystem.sh@37 -- # kill -0 1902461 00:08:06.261 02:56:01 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:06.261 02:56:01 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:06.261 02:56:01 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:06.261 02:56:01 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:06.519 00:08:06.519 real 0m1.348s 00:08:06.519 user 0m0.032s 00:08:06.519 sys 0m0.111s 00:08:06.519 02:56:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.519 02:56:01 -- common/autotest_common.sh@10 -- # set +x 00:08:06.519 ************************************ 00:08:06.519 END TEST filesystem_btrfs 00:08:06.519 ************************************ 00:08:06.519 02:56:01 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:06.519 02:56:01 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:06.519 02:56:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:06.519 02:56:01 -- common/autotest_common.sh@10 -- # set +x 00:08:06.519 ************************************ 00:08:06.519 START TEST filesystem_xfs 00:08:06.519 ************************************ 00:08:06.519 02:56:01 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:06.519 02:56:01 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:06.519 02:56:01 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:06.519 02:56:01 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:06.519 02:56:01 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:06.519 02:56:01 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:06.519 02:56:01 -- common/autotest_common.sh@904 -- # local i=0 00:08:06.519 02:56:01 -- common/autotest_common.sh@905 -- # local force 00:08:06.519 02:56:01 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 
00:08:06.519 02:56:01 -- common/autotest_common.sh@910 -- # force=-f 00:08:06.519 02:56:01 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:06.519 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:06.519 = sectsz=512 attr=2, projid32bit=1 00:08:06.519 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:06.519 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:06.519 data = bsize=4096 blocks=130560, imaxpct=25 00:08:06.519 = sunit=0 swidth=0 blks 00:08:06.519 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:06.519 log =internal log bsize=4096 blocks=16384, version=2 00:08:06.519 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:06.519 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:07.454 Discarding blocks...Done. 00:08:07.454 02:56:02 -- common/autotest_common.sh@921 -- # return 0 00:08:07.454 02:56:02 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.350 02:56:04 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.350 02:56:04 -- target/filesystem.sh@25 -- # sync 00:08:09.350 02:56:04 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.350 02:56:04 -- target/filesystem.sh@27 -- # sync 00:08:09.350 02:56:04 -- target/filesystem.sh@29 -- # i=0 00:08:09.350 02:56:04 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.350 02:56:04 -- target/filesystem.sh@37 -- # kill -0 1902461 00:08:09.350 02:56:04 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.350 02:56:04 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.350 02:56:04 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.350 02:56:04 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.350 00:08:09.350 real 0m3.011s 00:08:09.350 user 0m0.017s 00:08:09.350 sys 0m0.058s 00:08:09.350 02:56:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.350 02:56:04 -- common/autotest_common.sh@10 -- # set +x 00:08:09.350 ************************************ 00:08:09.350 END TEST filesystem_xfs 
00:08:09.350 ************************************ 00:08:09.350 02:56:04 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:09.608 02:56:04 -- target/filesystem.sh@93 -- # sync 00:08:09.608 02:56:04 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:09.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.608 02:56:04 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:09.608 02:56:04 -- common/autotest_common.sh@1198 -- # local i=0 00:08:09.608 02:56:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:09.608 02:56:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.608 02:56:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:09.608 02:56:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:09.608 02:56:04 -- common/autotest_common.sh@1210 -- # return 0 00:08:09.608 02:56:04 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.608 02:56:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:09.608 02:56:04 -- common/autotest_common.sh@10 -- # set +x 00:08:09.608 02:56:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:09.608 02:56:04 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:09.608 02:56:04 -- target/filesystem.sh@101 -- # killprocess 1902461 00:08:09.608 02:56:04 -- common/autotest_common.sh@926 -- # '[' -z 1902461 ']' 00:08:09.608 02:56:04 -- common/autotest_common.sh@930 -- # kill -0 1902461 00:08:09.608 02:56:04 -- common/autotest_common.sh@931 -- # uname 00:08:09.608 02:56:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:09.608 02:56:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1902461 00:08:09.608 02:56:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:09.608 02:56:04 -- common/autotest_common.sh@936 -- # 
'[' reactor_0 = sudo ']' 00:08:09.608 02:56:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1902461' 00:08:09.608 killing process with pid 1902461 00:08:09.608 02:56:04 -- common/autotest_common.sh@945 -- # kill 1902461 00:08:09.608 02:56:04 -- common/autotest_common.sh@950 -- # wait 1902461 00:08:10.174 02:56:05 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:10.174 00:08:10.174 real 0m12.819s 00:08:10.174 user 0m49.260s 00:08:10.174 sys 0m1.921s 00:08:10.174 02:56:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.174 02:56:05 -- common/autotest_common.sh@10 -- # set +x 00:08:10.174 ************************************ 00:08:10.174 END TEST nvmf_filesystem_no_in_capsule 00:08:10.174 ************************************ 00:08:10.174 02:56:05 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:10.174 02:56:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:10.174 02:56:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.174 02:56:05 -- common/autotest_common.sh@10 -- # set +x 00:08:10.174 ************************************ 00:08:10.174 START TEST nvmf_filesystem_in_capsule 00:08:10.174 ************************************ 00:08:10.174 02:56:05 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:10.174 02:56:05 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:10.174 02:56:05 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:10.174 02:56:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:10.174 02:56:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:10.174 02:56:05 -- common/autotest_common.sh@10 -- # set +x 00:08:10.174 02:56:05 -- nvmf/common.sh@469 -- # nvmfpid=1904198 00:08:10.174 02:56:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:10.174 02:56:05 -- nvmf/common.sh@470 -- # 
waitforlisten 1904198 00:08:10.174 02:56:05 -- common/autotest_common.sh@819 -- # '[' -z 1904198 ']' 00:08:10.174 02:56:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.174 02:56:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:10.174 02:56:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:10.174 02:56:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:10.174 02:56:05 -- common/autotest_common.sh@10 -- # set +x 00:08:10.174 [2024-07-14 02:56:05.300059] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:10.174 [2024-07-14 02:56:05.300140] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.174 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.174 [2024-07-14 02:56:05.370146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:10.433 [2024-07-14 02:56:05.465221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:10.433 [2024-07-14 02:56:05.465383] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:10.433 [2024-07-14 02:56:05.465402] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:10.433 [2024-07-14 02:56:05.465417] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:10.433 [2024-07-14 02:56:05.465474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.433 [2024-07-14 02:56:05.465529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.433 [2024-07-14 02:56:05.465593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.433 [2024-07-14 02:56:05.465590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.037 02:56:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:11.037 02:56:06 -- common/autotest_common.sh@852 -- # return 0 00:08:11.037 02:56:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:11.037 02:56:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:11.037 02:56:06 -- common/autotest_common.sh@10 -- # set +x 00:08:11.037 02:56:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.037 02:56:06 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:11.037 02:56:06 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:11.037 02:56:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.037 02:56:06 -- common/autotest_common.sh@10 -- # set +x 00:08:11.037 [2024-07-14 02:56:06.285521] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.295 02:56:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.295 02:56:06 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:11.295 02:56:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.295 02:56:06 -- common/autotest_common.sh@10 -- # set +x 00:08:11.295 Malloc1 00:08:11.295 02:56:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.295 02:56:06 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:11.295 02:56:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.295 02:56:06 -- 
common/autotest_common.sh@10 -- # set +x 00:08:11.295 02:56:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.295 02:56:06 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:11.295 02:56:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.295 02:56:06 -- common/autotest_common.sh@10 -- # set +x 00:08:11.295 02:56:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.295 02:56:06 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:11.295 02:56:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.295 02:56:06 -- common/autotest_common.sh@10 -- # set +x 00:08:11.295 [2024-07-14 02:56:06.469740] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.295 02:56:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.295 02:56:06 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:11.295 02:56:06 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:11.295 02:56:06 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:11.295 02:56:06 -- common/autotest_common.sh@1359 -- # local bs 00:08:11.295 02:56:06 -- common/autotest_common.sh@1360 -- # local nb 00:08:11.295 02:56:06 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:11.295 02:56:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.295 02:56:06 -- common/autotest_common.sh@10 -- # set +x 00:08:11.295 02:56:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.295 02:56:06 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:11.295 { 00:08:11.295 "name": "Malloc1", 00:08:11.295 "aliases": [ 00:08:11.295 "bf047196-d1b3-4067-bda5-4840380034be" 00:08:11.295 ], 00:08:11.295 "product_name": "Malloc disk", 00:08:11.295 "block_size": 512, 00:08:11.295 "num_blocks": 1048576, 00:08:11.295 "uuid": 
"bf047196-d1b3-4067-bda5-4840380034be", 00:08:11.295 "assigned_rate_limits": { 00:08:11.295 "rw_ios_per_sec": 0, 00:08:11.295 "rw_mbytes_per_sec": 0, 00:08:11.295 "r_mbytes_per_sec": 0, 00:08:11.295 "w_mbytes_per_sec": 0 00:08:11.295 }, 00:08:11.295 "claimed": true, 00:08:11.295 "claim_type": "exclusive_write", 00:08:11.295 "zoned": false, 00:08:11.295 "supported_io_types": { 00:08:11.295 "read": true, 00:08:11.295 "write": true, 00:08:11.295 "unmap": true, 00:08:11.295 "write_zeroes": true, 00:08:11.295 "flush": true, 00:08:11.295 "reset": true, 00:08:11.295 "compare": false, 00:08:11.295 "compare_and_write": false, 00:08:11.295 "abort": true, 00:08:11.295 "nvme_admin": false, 00:08:11.295 "nvme_io": false 00:08:11.295 }, 00:08:11.295 "memory_domains": [ 00:08:11.295 { 00:08:11.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.295 "dma_device_type": 2 00:08:11.295 } 00:08:11.295 ], 00:08:11.295 "driver_specific": {} 00:08:11.295 } 00:08:11.295 ]' 00:08:11.295 02:56:06 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:11.295 02:56:06 -- common/autotest_common.sh@1362 -- # bs=512 00:08:11.295 02:56:06 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:11.553 02:56:06 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:11.553 02:56:06 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:11.553 02:56:06 -- common/autotest_common.sh@1367 -- # echo 512 00:08:11.553 02:56:06 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:11.553 02:56:06 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:12.119 02:56:07 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:12.119 02:56:07 -- common/autotest_common.sh@1177 -- # local i=0 00:08:12.119 02:56:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 
nvme_devices=0 00:08:12.119 02:56:07 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:12.119 02:56:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:14.017 02:56:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:14.017 02:56:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:14.017 02:56:09 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:14.017 02:56:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:14.017 02:56:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:14.017 02:56:09 -- common/autotest_common.sh@1187 -- # return 0 00:08:14.017 02:56:09 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:14.017 02:56:09 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:14.017 02:56:09 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:14.017 02:56:09 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:14.017 02:56:09 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:14.017 02:56:09 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:14.017 02:56:09 -- setup/common.sh@80 -- # echo 536870912 00:08:14.017 02:56:09 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:14.017 02:56:09 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:14.017 02:56:09 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:14.017 02:56:09 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:14.276 02:56:09 -- target/filesystem.sh@69 -- # partprobe 00:08:15.211 02:56:10 -- target/filesystem.sh@70 -- # sleep 1 00:08:16.143 02:56:11 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:16.143 02:56:11 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:16.143 02:56:11 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:16.143 02:56:11 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:08:16.143 02:56:11 -- common/autotest_common.sh@10 -- # set +x 00:08:16.143 ************************************ 00:08:16.143 START TEST filesystem_in_capsule_ext4 00:08:16.144 ************************************ 00:08:16.144 02:56:11 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:16.144 02:56:11 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:16.144 02:56:11 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:16.144 02:56:11 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:16.144 02:56:11 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:16.144 02:56:11 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:16.144 02:56:11 -- common/autotest_common.sh@904 -- # local i=0 00:08:16.144 02:56:11 -- common/autotest_common.sh@905 -- # local force 00:08:16.144 02:56:11 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:16.144 02:56:11 -- common/autotest_common.sh@908 -- # force=-F 00:08:16.144 02:56:11 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:16.144 mke2fs 1.46.5 (30-Dec-2021) 00:08:16.401 Discarding device blocks: 0/522240 done 00:08:16.401 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:16.401 Filesystem UUID: 61781944-e280-4c8a-a4fd-c859767c0457 00:08:16.402 Superblock backups stored on blocks: 00:08:16.402 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:16.402 00:08:16.402 Allocating group tables: 0/64 done 00:08:16.402 Writing inode tables: 0/64 done 00:08:16.967 Creating journal (8192 blocks): done 00:08:16.967 Writing superblocks and filesystem accounting information: 0/64 done 00:08:16.967 00:08:16.967 02:56:12 -- common/autotest_common.sh@921 -- # return 0 00:08:16.967 02:56:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:16.967 02:56:12 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:16.967 02:56:12 
-- target/filesystem.sh@25 -- # sync 00:08:16.967 02:56:12 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:16.967 02:56:12 -- target/filesystem.sh@27 -- # sync 00:08:16.967 02:56:12 -- target/filesystem.sh@29 -- # i=0 00:08:16.967 02:56:12 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:17.227 02:56:12 -- target/filesystem.sh@37 -- # kill -0 1904198 00:08:17.227 02:56:12 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:17.227 02:56:12 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:17.227 02:56:12 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:17.227 02:56:12 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:17.227 00:08:17.227 real 0m0.977s 00:08:17.227 user 0m0.021s 00:08:17.227 sys 0m0.048s 00:08:17.227 02:56:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.227 02:56:12 -- common/autotest_common.sh@10 -- # set +x 00:08:17.227 ************************************ 00:08:17.227 END TEST filesystem_in_capsule_ext4 00:08:17.227 ************************************ 00:08:17.227 02:56:12 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:17.227 02:56:12 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:17.227 02:56:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.227 02:56:12 -- common/autotest_common.sh@10 -- # set +x 00:08:17.227 ************************************ 00:08:17.227 START TEST filesystem_in_capsule_btrfs 00:08:17.227 ************************************ 00:08:17.227 02:56:12 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:17.227 02:56:12 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:17.227 02:56:12 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:17.227 02:56:12 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:17.227 02:56:12 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:17.227 02:56:12 -- 
common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:17.227 02:56:12 -- common/autotest_common.sh@904 -- # local i=0 00:08:17.227 02:56:12 -- common/autotest_common.sh@905 -- # local force 00:08:17.227 02:56:12 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:17.227 02:56:12 -- common/autotest_common.sh@910 -- # force=-f 00:08:17.227 02:56:12 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:17.485 btrfs-progs v6.6.2 00:08:17.485 See https://btrfs.readthedocs.io for more information. 00:08:17.485 00:08:17.485 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:17.485 NOTE: several default settings have changed in version 5.15, please make sure 00:08:17.485 this does not affect your deployments: 00:08:17.485 - DUP for metadata (-m dup) 00:08:17.485 - enabled no-holes (-O no-holes) 00:08:17.485 - enabled free-space-tree (-R free-space-tree) 00:08:17.485 00:08:17.485 Label: (null) 00:08:17.485 UUID: 05fddefe-f3b1-48c9-82f7-22aa9e1dca1d 00:08:17.485 Node size: 16384 00:08:17.485 Sector size: 4096 00:08:17.485 Filesystem size: 510.00MiB 00:08:17.485 Block group profiles: 00:08:17.485 Data: single 8.00MiB 00:08:17.485 Metadata: DUP 32.00MiB 00:08:17.485 System: DUP 8.00MiB 00:08:17.485 SSD detected: yes 00:08:17.485 Zoned device: no 00:08:17.485 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:17.485 Runtime features: free-space-tree 00:08:17.485 Checksum: crc32c 00:08:17.485 Number of devices: 1 00:08:17.485 Devices: 00:08:17.485 ID SIZE PATH 00:08:17.485 1 510.00MiB /dev/nvme0n1p1 00:08:17.485 00:08:17.485 02:56:12 -- common/autotest_common.sh@921 -- # return 0 00:08:17.485 02:56:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:18.052 02:56:13 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:18.052 02:56:13 -- target/filesystem.sh@25 -- # sync 00:08:18.052 02:56:13 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:18.052 02:56:13 
-- target/filesystem.sh@27 -- # sync 00:08:18.052 02:56:13 -- target/filesystem.sh@29 -- # i=0 00:08:18.052 02:56:13 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:18.052 02:56:13 -- target/filesystem.sh@37 -- # kill -0 1904198 00:08:18.052 02:56:13 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:18.052 02:56:13 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:18.052 02:56:13 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:18.052 02:56:13 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:18.052 00:08:18.052 real 0m0.889s 00:08:18.052 user 0m0.015s 00:08:18.052 sys 0m0.116s 00:08:18.052 02:56:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:18.052 02:56:13 -- common/autotest_common.sh@10 -- # set +x 00:08:18.052 ************************************ 00:08:18.052 END TEST filesystem_in_capsule_btrfs 00:08:18.052 ************************************ 00:08:18.052 02:56:13 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:18.052 02:56:13 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:18.052 02:56:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:18.052 02:56:13 -- common/autotest_common.sh@10 -- # set +x 00:08:18.052 ************************************ 00:08:18.052 START TEST filesystem_in_capsule_xfs 00:08:18.052 ************************************ 00:08:18.052 02:56:13 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:18.052 02:56:13 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:18.052 02:56:13 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:18.052 02:56:13 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:18.052 02:56:13 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:18.052 02:56:13 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:18.052 02:56:13 -- common/autotest_common.sh@904 -- # local i=0 00:08:18.052 02:56:13 -- 
common/autotest_common.sh@905 -- # local force 00:08:18.052 02:56:13 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:18.052 02:56:13 -- common/autotest_common.sh@910 -- # force=-f 00:08:18.052 02:56:13 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:18.052 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:18.052 = sectsz=512 attr=2, projid32bit=1 00:08:18.052 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:18.052 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:18.052 data = bsize=4096 blocks=130560, imaxpct=25 00:08:18.052 = sunit=0 swidth=0 blks 00:08:18.052 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:18.052 log =internal log bsize=4096 blocks=16384, version=2 00:08:18.052 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:18.052 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:19.419 Discarding blocks...Done. 00:08:19.419 02:56:14 -- common/autotest_common.sh@921 -- # return 0 00:08:19.419 02:56:14 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:21.944 02:56:16 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:21.944 02:56:16 -- target/filesystem.sh@25 -- # sync 00:08:21.944 02:56:16 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:21.944 02:56:16 -- target/filesystem.sh@27 -- # sync 00:08:21.944 02:56:16 -- target/filesystem.sh@29 -- # i=0 00:08:21.944 02:56:16 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:21.944 02:56:16 -- target/filesystem.sh@37 -- # kill -0 1904198 00:08:21.944 02:56:16 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:21.944 02:56:16 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:21.944 02:56:16 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:21.944 02:56:16 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:21.944 00:08:21.944 real 0m3.439s 00:08:21.944 user 0m0.015s 00:08:21.944 sys 0m0.055s 00:08:21.944 02:56:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.944 02:56:16 -- 
common/autotest_common.sh@10 -- # set +x 00:08:21.944 ************************************ 00:08:21.944 END TEST filesystem_in_capsule_xfs 00:08:21.944 ************************************ 00:08:21.944 02:56:16 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:21.944 02:56:16 -- target/filesystem.sh@93 -- # sync 00:08:21.944 02:56:16 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:21.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:21.944 02:56:17 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:21.944 02:56:17 -- common/autotest_common.sh@1198 -- # local i=0 00:08:21.944 02:56:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:21.944 02:56:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.944 02:56:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:21.944 02:56:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:21.944 02:56:17 -- common/autotest_common.sh@1210 -- # return 0 00:08:21.944 02:56:17 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:21.944 02:56:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:21.944 02:56:17 -- common/autotest_common.sh@10 -- # set +x 00:08:21.944 02:56:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:21.944 02:56:17 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:21.944 02:56:17 -- target/filesystem.sh@101 -- # killprocess 1904198 00:08:21.944 02:56:17 -- common/autotest_common.sh@926 -- # '[' -z 1904198 ']' 00:08:21.944 02:56:17 -- common/autotest_common.sh@930 -- # kill -0 1904198 00:08:21.944 02:56:17 -- common/autotest_common.sh@931 -- # uname 00:08:21.944 02:56:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:21.944 02:56:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1904198 
00:08:21.944 02:56:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:21.944 02:56:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:21.944 02:56:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1904198' 00:08:21.944 killing process with pid 1904198 00:08:21.944 02:56:17 -- common/autotest_common.sh@945 -- # kill 1904198 00:08:21.944 02:56:17 -- common/autotest_common.sh@950 -- # wait 1904198 00:08:22.508 02:56:17 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:22.509 00:08:22.509 real 0m12.301s 00:08:22.509 user 0m47.448s 00:08:22.509 sys 0m1.784s 00:08:22.509 02:56:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.509 02:56:17 -- common/autotest_common.sh@10 -- # set +x 00:08:22.509 ************************************ 00:08:22.509 END TEST nvmf_filesystem_in_capsule 00:08:22.509 ************************************ 00:08:22.509 02:56:17 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:22.509 02:56:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:22.509 02:56:17 -- nvmf/common.sh@116 -- # sync 00:08:22.509 02:56:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:22.509 02:56:17 -- nvmf/common.sh@119 -- # set +e 00:08:22.509 02:56:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:22.509 02:56:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:22.509 rmmod nvme_tcp 00:08:22.509 rmmod nvme_fabrics 00:08:22.509 rmmod nvme_keyring 00:08:22.509 02:56:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:22.509 02:56:17 -- nvmf/common.sh@123 -- # set -e 00:08:22.509 02:56:17 -- nvmf/common.sh@124 -- # return 0 00:08:22.509 02:56:17 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:22.509 02:56:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:22.509 02:56:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:22.509 02:56:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:22.509 02:56:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:08:22.509 02:56:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:22.509 02:56:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.509 02:56:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.509 02:56:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.042 02:56:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:25.042 00:08:25.042 real 0m29.638s 00:08:25.042 user 1m37.664s 00:08:25.042 sys 0m5.269s 00:08:25.042 02:56:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.042 02:56:19 -- common/autotest_common.sh@10 -- # set +x 00:08:25.042 ************************************ 00:08:25.042 END TEST nvmf_filesystem 00:08:25.042 ************************************ 00:08:25.042 02:56:19 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:25.042 02:56:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:25.042 02:56:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.042 02:56:19 -- common/autotest_common.sh@10 -- # set +x 00:08:25.042 ************************************ 00:08:25.042 START TEST nvmf_discovery 00:08:25.042 ************************************ 00:08:25.042 02:56:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:25.042 * Looking for test storage... 
00:08:25.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.042 02:56:19 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.042 02:56:19 -- nvmf/common.sh@7 -- # uname -s 00:08:25.042 02:56:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.042 02:56:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.042 02:56:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.042 02:56:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.042 02:56:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.042 02:56:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.042 02:56:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.042 02:56:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.042 02:56:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.042 02:56:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.042 02:56:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.042 02:56:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:25.042 02:56:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.042 02:56:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.042 02:56:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.042 02:56:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.042 02:56:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.042 02:56:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.042 02:56:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.042 02:56:19 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.042 02:56:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.042 02:56:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.042 02:56:19 -- paths/export.sh@5 -- # export PATH 00:08:25.042 02:56:19 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.042 02:56:19 -- nvmf/common.sh@46 -- # : 0 00:08:25.042 02:56:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:25.042 02:56:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:25.042 02:56:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:25.042 02:56:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.042 02:56:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.042 02:56:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:25.042 02:56:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:25.042 02:56:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:25.042 02:56:19 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:25.042 02:56:19 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:25.042 02:56:19 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:25.042 02:56:19 -- target/discovery.sh@15 -- # hash nvme 00:08:25.042 02:56:19 -- target/discovery.sh@20 -- # nvmftestinit 00:08:25.042 02:56:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:25.042 02:56:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.042 02:56:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:25.042 02:56:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:25.042 02:56:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:25.042 02:56:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.042 02:56:19 -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:08:25.042 02:56:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.042 02:56:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:25.042 02:56:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:25.042 02:56:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:25.042 02:56:19 -- common/autotest_common.sh@10 -- # set +x 00:08:26.417 02:56:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:26.417 02:56:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:26.417 02:56:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:26.417 02:56:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:26.417 02:56:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:26.417 02:56:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:26.417 02:56:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:26.417 02:56:21 -- nvmf/common.sh@294 -- # net_devs=() 00:08:26.417 02:56:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:26.417 02:56:21 -- nvmf/common.sh@295 -- # e810=() 00:08:26.417 02:56:21 -- nvmf/common.sh@295 -- # local -ga e810 00:08:26.417 02:56:21 -- nvmf/common.sh@296 -- # x722=() 00:08:26.417 02:56:21 -- nvmf/common.sh@296 -- # local -ga x722 00:08:26.418 02:56:21 -- nvmf/common.sh@297 -- # mlx=() 00:08:26.418 02:56:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:26.418 02:56:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.418 02:56:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.418 02:56:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.418 02:56:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.418 02:56:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.418 02:56:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.418 02:56:21 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.418 02:56:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.418 02:56:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.418 02:56:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.418 02:56:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.418 02:56:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:26.418 02:56:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:26.418 02:56:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:26.418 02:56:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:26.418 02:56:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:26.418 02:56:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:26.418 02:56:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:26.418 02:56:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:26.418 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:26.418 02:56:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:26.418 02:56:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:26.418 02:56:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.418 02:56:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.418 02:56:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:26.418 02:56:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:26.418 02:56:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:26.418 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:26.418 02:56:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:26.418 02:56:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:26.418 02:56:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:26.418 02:56:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:26.418 02:56:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:08:26.418 02:56:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:26.418 02:56:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:26.418 02:56:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:26.418 02:56:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:26.418 02:56:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.418 02:56:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:26.418 02:56:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.418 02:56:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:26.418 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:26.418 02:56:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.418 02:56:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:26.418 02:56:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.418 02:56:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:26.418 02:56:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.418 02:56:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:26.418 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:26.418 02:56:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.418 02:56:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:26.418 02:56:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:26.418 02:56:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:26.418 02:56:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:26.418 02:56:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:26.418 02:56:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.418 02:56:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:26.418 02:56:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:26.418 02:56:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:26.418 02:56:21 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:08:26.418 02:56:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:26.418 02:56:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:26.418 02:56:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:26.418 02:56:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.418 02:56:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:26.418 02:56:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:26.418 02:56:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:26.418 02:56:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:26.677 02:56:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:26.677 02:56:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:26.677 02:56:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:26.677 02:56:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:26.677 02:56:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:26.677 02:56:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:26.677 02:56:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:26.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:26.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:08:26.677 00:08:26.677 --- 10.0.0.2 ping statistics --- 00:08:26.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.677 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:08:26.677 02:56:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:26.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:26.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:08:26.677 00:08:26.677 --- 10.0.0.1 ping statistics --- 00:08:26.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.677 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:08:26.677 02:56:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.677 02:56:21 -- nvmf/common.sh@410 -- # return 0 00:08:26.677 02:56:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:26.677 02:56:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.677 02:56:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:26.677 02:56:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:26.677 02:56:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.677 02:56:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:26.677 02:56:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:26.677 02:56:21 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:26.677 02:56:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:26.677 02:56:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:26.677 02:56:21 -- common/autotest_common.sh@10 -- # set +x 00:08:26.677 02:56:21 -- nvmf/common.sh@469 -- # nvmfpid=1907847 00:08:26.677 02:56:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:26.677 02:56:21 -- nvmf/common.sh@470 -- # waitforlisten 1907847 00:08:26.677 02:56:21 -- common/autotest_common.sh@819 -- # '[' -z 1907847 ']' 00:08:26.677 02:56:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.677 02:56:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:26.677 02:56:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:26.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.677 02:56:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:26.677 02:56:21 -- common/autotest_common.sh@10 -- # set +x 00:08:26.677 [2024-07-14 02:56:21.846523] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:26.677 [2024-07-14 02:56:21.846610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.677 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.677 [2024-07-14 02:56:21.916008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.935 [2024-07-14 02:56:22.008881] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:26.935 [2024-07-14 02:56:22.009053] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.935 [2024-07-14 02:56:22.009072] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.935 [2024-07-14 02:56:22.009086] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:26.935 [2024-07-14 02:56:22.009176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.935 [2024-07-14 02:56:22.009233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.935 [2024-07-14 02:56:22.009288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.935 [2024-07-14 02:56:22.009291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.563 02:56:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:27.563 02:56:22 -- common/autotest_common.sh@852 -- # return 0 00:08:27.563 02:56:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:27.563 02:56:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:27.563 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.563 02:56:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.563 02:56:22 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:27.563 02:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.563 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.563 [2024-07-14 02:56:22.795380] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.563 02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.563 02:56:22 -- target/discovery.sh@26 -- # seq 1 4 00:08:27.563 02:56:22 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:27.563 02:56:22 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:27.563 02:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.563 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.563 Null1 00:08:27.563 02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.563 02:56:22 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:27.563 02:56:22 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:08:27.563 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.821 02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.821 02:56:22 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:27.821 02:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.821 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.821 02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.821 02:56:22 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.821 02:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.821 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.821 [2024-07-14 02:56:22.835646] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.821 02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.821 02:56:22 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:27.821 02:56:22 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:27.821 02:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.821 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.821 Null2 00:08:27.821 02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.821 02:56:22 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:27.821 02:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.821 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.821 02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.821 02:56:22 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:27.821 02:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.821 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.821 
02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.821 02:56:22 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:27.821 02:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.821 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.821 02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.821 02:56:22 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:27.821 02:56:22 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:27.821 02:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.821 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.821 Null3 00:08:27.821 02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.821 02:56:22 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:27.821 02:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.821 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.822 02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.822 02:56:22 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:27.822 02:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.822 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.822 02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.822 02:56:22 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:27.822 02:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.822 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.822 02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.822 02:56:22 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:27.822 02:56:22 -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null4 102400 512 00:08:27.822 02:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.822 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.822 Null4 00:08:27.822 02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.822 02:56:22 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:27.822 02:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.822 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.822 02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.822 02:56:22 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:27.822 02:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.822 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.822 02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.822 02:56:22 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:27.822 02:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.822 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.822 02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.822 02:56:22 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:27.822 02:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.822 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.822 02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.822 02:56:22 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:27.822 02:56:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:27.822 02:56:22 -- common/autotest_common.sh@10 -- # set +x 00:08:27.822 02:56:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:27.822 02:56:22 -- 
target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:28.080 00:08:28.080 Discovery Log Number of Records 6, Generation counter 6 00:08:28.080 =====Discovery Log Entry 0====== 00:08:28.080 trtype: tcp 00:08:28.080 adrfam: ipv4 00:08:28.080 subtype: current discovery subsystem 00:08:28.080 treq: not required 00:08:28.080 portid: 0 00:08:28.080 trsvcid: 4420 00:08:28.080 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:28.080 traddr: 10.0.0.2 00:08:28.080 eflags: explicit discovery connections, duplicate discovery information 00:08:28.080 sectype: none 00:08:28.080 =====Discovery Log Entry 1====== 00:08:28.080 trtype: tcp 00:08:28.080 adrfam: ipv4 00:08:28.080 subtype: nvme subsystem 00:08:28.080 treq: not required 00:08:28.080 portid: 0 00:08:28.080 trsvcid: 4420 00:08:28.080 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:28.080 traddr: 10.0.0.2 00:08:28.080 eflags: none 00:08:28.080 sectype: none 00:08:28.080 =====Discovery Log Entry 2====== 00:08:28.080 trtype: tcp 00:08:28.080 adrfam: ipv4 00:08:28.080 subtype: nvme subsystem 00:08:28.080 treq: not required 00:08:28.080 portid: 0 00:08:28.080 trsvcid: 4420 00:08:28.080 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:28.080 traddr: 10.0.0.2 00:08:28.080 eflags: none 00:08:28.080 sectype: none 00:08:28.080 =====Discovery Log Entry 3====== 00:08:28.080 trtype: tcp 00:08:28.080 adrfam: ipv4 00:08:28.080 subtype: nvme subsystem 00:08:28.080 treq: not required 00:08:28.080 portid: 0 00:08:28.080 trsvcid: 4420 00:08:28.080 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:28.080 traddr: 10.0.0.2 00:08:28.080 eflags: none 00:08:28.080 sectype: none 00:08:28.080 =====Discovery Log Entry 4====== 00:08:28.080 trtype: tcp 00:08:28.080 adrfam: ipv4 00:08:28.080 subtype: nvme subsystem 00:08:28.080 treq: not required 00:08:28.080 portid: 0 00:08:28.080 trsvcid: 4420 00:08:28.080 subnqn: 
nqn.2016-06.io.spdk:cnode4 00:08:28.080 traddr: 10.0.0.2 00:08:28.080 eflags: none 00:08:28.080 sectype: none 00:08:28.080 =====Discovery Log Entry 5====== 00:08:28.080 trtype: tcp 00:08:28.080 adrfam: ipv4 00:08:28.080 subtype: discovery subsystem referral 00:08:28.080 treq: not required 00:08:28.080 portid: 0 00:08:28.080 trsvcid: 4430 00:08:28.080 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:28.080 traddr: 10.0.0.2 00:08:28.080 eflags: none 00:08:28.080 sectype: none 00:08:28.080 02:56:23 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:28.080 Perform nvmf subsystem discovery via RPC 00:08:28.080 02:56:23 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:28.080 02:56:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.080 02:56:23 -- common/autotest_common.sh@10 -- # set +x 00:08:28.080 [2024-07-14 02:56:23.132522] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:28.080 [ 00:08:28.080 { 00:08:28.080 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:28.080 "subtype": "Discovery", 00:08:28.080 "listen_addresses": [ 00:08:28.080 { 00:08:28.080 "transport": "TCP", 00:08:28.080 "trtype": "TCP", 00:08:28.080 "adrfam": "IPv4", 00:08:28.080 "traddr": "10.0.0.2", 00:08:28.080 "trsvcid": "4420" 00:08:28.080 } 00:08:28.080 ], 00:08:28.080 "allow_any_host": true, 00:08:28.080 "hosts": [] 00:08:28.080 }, 00:08:28.080 { 00:08:28.080 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:28.080 "subtype": "NVMe", 00:08:28.080 "listen_addresses": [ 00:08:28.080 { 00:08:28.080 "transport": "TCP", 00:08:28.080 "trtype": "TCP", 00:08:28.080 "adrfam": "IPv4", 00:08:28.080 "traddr": "10.0.0.2", 00:08:28.080 "trsvcid": "4420" 00:08:28.080 } 00:08:28.080 ], 00:08:28.080 "allow_any_host": true, 00:08:28.080 "hosts": [], 00:08:28.080 "serial_number": "SPDK00000000000001", 00:08:28.080 "model_number": 
"SPDK bdev Controller", 00:08:28.080 "max_namespaces": 32, 00:08:28.080 "min_cntlid": 1, 00:08:28.080 "max_cntlid": 65519, 00:08:28.080 "namespaces": [ 00:08:28.080 { 00:08:28.080 "nsid": 1, 00:08:28.080 "bdev_name": "Null1", 00:08:28.080 "name": "Null1", 00:08:28.080 "nguid": "3799F487E3D84DE78F10D54815D48B38", 00:08:28.080 "uuid": "3799f487-e3d8-4de7-8f10-d54815d48b38" 00:08:28.080 } 00:08:28.080 ] 00:08:28.080 }, 00:08:28.080 { 00:08:28.080 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:28.080 "subtype": "NVMe", 00:08:28.080 "listen_addresses": [ 00:08:28.080 { 00:08:28.080 "transport": "TCP", 00:08:28.080 "trtype": "TCP", 00:08:28.080 "adrfam": "IPv4", 00:08:28.080 "traddr": "10.0.0.2", 00:08:28.080 "trsvcid": "4420" 00:08:28.080 } 00:08:28.080 ], 00:08:28.080 "allow_any_host": true, 00:08:28.080 "hosts": [], 00:08:28.080 "serial_number": "SPDK00000000000002", 00:08:28.080 "model_number": "SPDK bdev Controller", 00:08:28.080 "max_namespaces": 32, 00:08:28.080 "min_cntlid": 1, 00:08:28.080 "max_cntlid": 65519, 00:08:28.080 "namespaces": [ 00:08:28.080 { 00:08:28.080 "nsid": 1, 00:08:28.080 "bdev_name": "Null2", 00:08:28.080 "name": "Null2", 00:08:28.080 "nguid": "0CB92CD21BDB4A6388E46BC7FFED39A6", 00:08:28.080 "uuid": "0cb92cd2-1bdb-4a63-88e4-6bc7ffed39a6" 00:08:28.080 } 00:08:28.080 ] 00:08:28.080 }, 00:08:28.080 { 00:08:28.080 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:28.080 "subtype": "NVMe", 00:08:28.080 "listen_addresses": [ 00:08:28.080 { 00:08:28.080 "transport": "TCP", 00:08:28.081 "trtype": "TCP", 00:08:28.081 "adrfam": "IPv4", 00:08:28.081 "traddr": "10.0.0.2", 00:08:28.081 "trsvcid": "4420" 00:08:28.081 } 00:08:28.081 ], 00:08:28.081 "allow_any_host": true, 00:08:28.081 "hosts": [], 00:08:28.081 "serial_number": "SPDK00000000000003", 00:08:28.081 "model_number": "SPDK bdev Controller", 00:08:28.081 "max_namespaces": 32, 00:08:28.081 "min_cntlid": 1, 00:08:28.081 "max_cntlid": 65519, 00:08:28.081 "namespaces": [ 00:08:28.081 { 00:08:28.081 "nsid": 1, 
00:08:28.081 "bdev_name": "Null3", 00:08:28.081 "name": "Null3", 00:08:28.081 "nguid": "8D7EE3C16FBA4F55A1177B845A7D3085", 00:08:28.081 "uuid": "8d7ee3c1-6fba-4f55-a117-7b845a7d3085" 00:08:28.081 } 00:08:28.081 ] 00:08:28.081 }, 00:08:28.081 { 00:08:28.081 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:28.081 "subtype": "NVMe", 00:08:28.081 "listen_addresses": [ 00:08:28.081 { 00:08:28.081 "transport": "TCP", 00:08:28.081 "trtype": "TCP", 00:08:28.081 "adrfam": "IPv4", 00:08:28.081 "traddr": "10.0.0.2", 00:08:28.081 "trsvcid": "4420" 00:08:28.081 } 00:08:28.081 ], 00:08:28.081 "allow_any_host": true, 00:08:28.081 "hosts": [], 00:08:28.081 "serial_number": "SPDK00000000000004", 00:08:28.081 "model_number": "SPDK bdev Controller", 00:08:28.081 "max_namespaces": 32, 00:08:28.081 "min_cntlid": 1, 00:08:28.081 "max_cntlid": 65519, 00:08:28.081 "namespaces": [ 00:08:28.081 { 00:08:28.081 "nsid": 1, 00:08:28.081 "bdev_name": "Null4", 00:08:28.081 "name": "Null4", 00:08:28.081 "nguid": "C9A5D33B4E5D4889A45E902B79F758FB", 00:08:28.081 "uuid": "c9a5d33b-4e5d-4889-a45e-902b79f758fb" 00:08:28.081 } 00:08:28.081 ] 00:08:28.081 } 00:08:28.081 ] 00:08:28.081 02:56:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.081 02:56:23 -- target/discovery.sh@42 -- # seq 1 4 00:08:28.081 02:56:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:28.081 02:56:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:28.081 02:56:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.081 02:56:23 -- common/autotest_common.sh@10 -- # set +x 00:08:28.081 02:56:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.081 02:56:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:28.081 02:56:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.081 02:56:23 -- common/autotest_common.sh@10 -- # set +x 00:08:28.081 02:56:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.081 02:56:23 -- 
target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:28.081 02:56:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:28.081 02:56:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.081 02:56:23 -- common/autotest_common.sh@10 -- # set +x 00:08:28.081 02:56:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.081 02:56:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:28.081 02:56:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.081 02:56:23 -- common/autotest_common.sh@10 -- # set +x 00:08:28.081 02:56:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.081 02:56:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:28.081 02:56:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:28.081 02:56:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.081 02:56:23 -- common/autotest_common.sh@10 -- # set +x 00:08:28.081 02:56:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.081 02:56:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:28.081 02:56:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.081 02:56:23 -- common/autotest_common.sh@10 -- # set +x 00:08:28.081 02:56:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.081 02:56:23 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:28.081 02:56:23 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:28.081 02:56:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.081 02:56:23 -- common/autotest_common.sh@10 -- # set +x 00:08:28.081 02:56:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.081 02:56:23 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:28.081 02:56:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.081 02:56:23 -- common/autotest_common.sh@10 -- # set +x 00:08:28.081 
02:56:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.081 02:56:23 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:28.081 02:56:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.081 02:56:23 -- common/autotest_common.sh@10 -- # set +x 00:08:28.081 02:56:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.081 02:56:23 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:28.081 02:56:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.081 02:56:23 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:28.081 02:56:23 -- common/autotest_common.sh@10 -- # set +x 00:08:28.081 02:56:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.081 02:56:23 -- target/discovery.sh@49 -- # check_bdevs= 00:08:28.081 02:56:23 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:28.081 02:56:23 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:28.081 02:56:23 -- target/discovery.sh@57 -- # nvmftestfini 00:08:28.081 02:56:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:28.081 02:56:23 -- nvmf/common.sh@116 -- # sync 00:08:28.081 02:56:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:28.081 02:56:23 -- nvmf/common.sh@119 -- # set +e 00:08:28.081 02:56:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:28.081 02:56:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:28.081 rmmod nvme_tcp 00:08:28.081 rmmod nvme_fabrics 00:08:28.081 rmmod nvme_keyring 00:08:28.081 02:56:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:28.081 02:56:23 -- nvmf/common.sh@123 -- # set -e 00:08:28.081 02:56:23 -- nvmf/common.sh@124 -- # return 0 00:08:28.081 02:56:23 -- nvmf/common.sh@477 -- # '[' -n 1907847 ']' 00:08:28.081 02:56:23 -- nvmf/common.sh@478 -- # killprocess 1907847 00:08:28.081 02:56:23 -- common/autotest_common.sh@926 -- # '[' -z 1907847 ']' 00:08:28.081 02:56:23 -- common/autotest_common.sh@930 -- # kill -0 1907847 00:08:28.081 
02:56:23 -- common/autotest_common.sh@931 -- # uname 00:08:28.081 02:56:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:28.081 02:56:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1907847 00:08:28.340 02:56:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:28.340 02:56:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:28.340 02:56:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1907847' 00:08:28.340 killing process with pid 1907847 00:08:28.340 02:56:23 -- common/autotest_common.sh@945 -- # kill 1907847 00:08:28.340 [2024-07-14 02:56:23.354838] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:28.340 02:56:23 -- common/autotest_common.sh@950 -- # wait 1907847 00:08:28.340 02:56:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:28.340 02:56:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:28.340 02:56:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:28.340 02:56:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:28.340 02:56:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:28.340 02:56:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.340 02:56:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.340 02:56:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.876 02:56:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:30.876 00:08:30.876 real 0m5.923s 00:08:30.876 user 0m7.134s 00:08:30.876 sys 0m1.801s 00:08:30.876 02:56:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.876 02:56:25 -- common/autotest_common.sh@10 -- # set +x 00:08:30.876 ************************************ 00:08:30.876 END TEST nvmf_discovery 00:08:30.876 ************************************ 00:08:30.876 02:56:25 -- 
nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:30.876 02:56:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:30.876 02:56:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:30.876 02:56:25 -- common/autotest_common.sh@10 -- # set +x 00:08:30.876 ************************************ 00:08:30.876 START TEST nvmf_referrals 00:08:30.876 ************************************ 00:08:30.876 02:56:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:30.876 * Looking for test storage... 00:08:30.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:30.876 02:56:25 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:30.876 02:56:25 -- nvmf/common.sh@7 -- # uname -s 00:08:30.876 02:56:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:30.876 02:56:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:30.876 02:56:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:30.876 02:56:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:30.876 02:56:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:30.876 02:56:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:30.876 02:56:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:30.876 02:56:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:30.876 02:56:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:30.876 02:56:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:30.876 02:56:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:30.876 02:56:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:30.876 02:56:25 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:30.876 02:56:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:30.876 02:56:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:30.876 02:56:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:30.876 02:56:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:30.876 02:56:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:30.876 02:56:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:30.876 02:56:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.876 02:56:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.876 02:56:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.876 02:56:25 -- paths/export.sh@5 -- # export PATH 00:08:30.876 02:56:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:30.876 02:56:25 -- nvmf/common.sh@46 -- # : 0 00:08:30.876 02:56:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:30.876 02:56:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:30.876 02:56:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:30.876 02:56:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:30.876 02:56:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:30.876 02:56:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:30.876 02:56:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:30.876 02:56:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:30.876 02:56:25 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:30.876 02:56:25 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:30.876 02:56:25 -- 
target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:30.876 02:56:25 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:30.876 02:56:25 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:30.876 02:56:25 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:30.876 02:56:25 -- target/referrals.sh@37 -- # nvmftestinit 00:08:30.876 02:56:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:30.876 02:56:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:30.876 02:56:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:30.876 02:56:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:30.876 02:56:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:30.876 02:56:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:30.876 02:56:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:30.876 02:56:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:30.876 02:56:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:30.876 02:56:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:30.876 02:56:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:30.876 02:56:25 -- common/autotest_common.sh@10 -- # set +x 00:08:32.777 02:56:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:32.777 02:56:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:32.777 02:56:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:32.777 02:56:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:32.777 02:56:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:32.777 02:56:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:32.777 02:56:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:32.777 02:56:27 -- nvmf/common.sh@294 -- # net_devs=() 00:08:32.777 02:56:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:32.777 02:56:27 -- nvmf/common.sh@295 -- # e810=() 00:08:32.777 02:56:27 -- nvmf/common.sh@295 -- # local 
-ga e810 00:08:32.777 02:56:27 -- nvmf/common.sh@296 -- # x722=() 00:08:32.777 02:56:27 -- nvmf/common.sh@296 -- # local -ga x722 00:08:32.777 02:56:27 -- nvmf/common.sh@297 -- # mlx=() 00:08:32.777 02:56:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:32.777 02:56:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:32.777 02:56:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:32.777 02:56:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:32.777 02:56:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:32.777 02:56:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:32.777 02:56:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:32.777 02:56:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:32.777 02:56:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:32.777 02:56:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:32.777 02:56:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:32.777 02:56:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:32.777 02:56:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:32.777 02:56:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:32.777 02:56:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:32.777 02:56:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:32.777 02:56:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:32.777 02:56:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:32.777 02:56:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:32.777 02:56:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:32.777 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:32.777 02:56:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:32.777 02:56:27 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:32.777 02:56:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.777 02:56:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.777 02:56:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:32.777 02:56:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:32.777 02:56:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:32.777 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:32.777 02:56:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:32.777 02:56:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:32.777 02:56:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:32.777 02:56:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:32.777 02:56:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:32.777 02:56:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:32.777 02:56:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:32.778 02:56:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:32.778 02:56:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:32.778 02:56:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.778 02:56:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:32.778 02:56:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.778 02:56:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:32.778 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:32.778 02:56:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.778 02:56:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:32.778 02:56:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:32.778 02:56:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:32.778 02:56:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:32.778 02:56:27 -- nvmf/common.sh@388 -- # echo 
'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:32.778 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:32.778 02:56:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:32.778 02:56:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:32.778 02:56:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:32.778 02:56:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:32.778 02:56:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:32.778 02:56:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:32.778 02:56:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.778 02:56:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.778 02:56:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:32.778 02:56:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:32.778 02:56:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:32.778 02:56:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:32.778 02:56:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:32.778 02:56:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:32.778 02:56:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.778 02:56:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:32.778 02:56:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:32.778 02:56:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:32.778 02:56:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:32.778 02:56:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:32.778 02:56:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:32.778 02:56:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:32.778 02:56:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:32.778 02:56:27 -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:32.778 02:56:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:32.778 02:56:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:32.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:08:32.778 00:08:32.778 --- 10.0.0.2 ping statistics --- 00:08:32.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.778 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:08:32.778 02:56:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:32.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:32.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:08:32.778 00:08:32.778 --- 10.0.0.1 ping statistics --- 00:08:32.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.778 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:08:32.778 02:56:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.778 02:56:27 -- nvmf/common.sh@410 -- # return 0 00:08:32.778 02:56:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:32.778 02:56:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.778 02:56:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:32.778 02:56:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:32.778 02:56:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.778 02:56:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:32.778 02:56:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:32.778 02:56:27 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:32.778 02:56:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:32.778 02:56:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:32.778 02:56:27 -- common/autotest_common.sh@10 -- # set +x 00:08:32.778 02:56:27 -- nvmf/common.sh@469 -- # nvmfpid=1909966 00:08:32.778 02:56:27 
-- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.778 02:56:27 -- nvmf/common.sh@470 -- # waitforlisten 1909966 00:08:32.778 02:56:27 -- common/autotest_common.sh@819 -- # '[' -z 1909966 ']' 00:08:32.778 02:56:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.778 02:56:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:32.778 02:56:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.778 02:56:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:32.778 02:56:27 -- common/autotest_common.sh@10 -- # set +x 00:08:32.778 [2024-07-14 02:56:27.847905] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:32.778 [2024-07-14 02:56:27.847991] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.778 EAL: No free 2048 kB hugepages reported on node 1 00:08:32.778 [2024-07-14 02:56:27.916297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.778 [2024-07-14 02:56:28.008249] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:32.778 [2024-07-14 02:56:28.008422] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.778 [2024-07-14 02:56:28.008441] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.778 [2024-07-14 02:56:28.008458] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:32.778 [2024-07-14 02:56:28.008558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.778 [2024-07-14 02:56:28.008617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.778 [2024-07-14 02:56:28.008672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.778 [2024-07-14 02:56:28.008674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.712 02:56:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:33.712 02:56:28 -- common/autotest_common.sh@852 -- # return 0 00:08:33.712 02:56:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:33.712 02:56:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:33.712 02:56:28 -- common/autotest_common.sh@10 -- # set +x 00:08:33.712 02:56:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.712 02:56:28 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:33.712 02:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.712 02:56:28 -- common/autotest_common.sh@10 -- # set +x 00:08:33.712 [2024-07-14 02:56:28.792443] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.712 02:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.712 02:56:28 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:33.712 02:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.712 02:56:28 -- common/autotest_common.sh@10 -- # set +x 00:08:33.712 [2024-07-14 02:56:28.804628] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:33.712 02:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.712 02:56:28 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:33.712 02:56:28 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:08:33.712 02:56:28 -- common/autotest_common.sh@10 -- # set +x 00:08:33.712 02:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.712 02:56:28 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:33.712 02:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.712 02:56:28 -- common/autotest_common.sh@10 -- # set +x 00:08:33.712 02:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.712 02:56:28 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:33.712 02:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.712 02:56:28 -- common/autotest_common.sh@10 -- # set +x 00:08:33.712 02:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.712 02:56:28 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.712 02:56:28 -- target/referrals.sh@48 -- # jq length 00:08:33.712 02:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.712 02:56:28 -- common/autotest_common.sh@10 -- # set +x 00:08:33.712 02:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.712 02:56:28 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:33.712 02:56:28 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:33.712 02:56:28 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:33.712 02:56:28 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.712 02:56:28 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:33.712 02:56:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.712 02:56:28 -- common/autotest_common.sh@10 -- # set +x 00:08:33.712 02:56:28 -- target/referrals.sh@21 -- # sort 00:08:33.712 02:56:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.712 02:56:28 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:33.712 02:56:28 -- target/referrals.sh@49 -- # [[ 127.0.0.2 
127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:33.712 02:56:28 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:33.712 02:56:28 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.712 02:56:28 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.712 02:56:28 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.712 02:56:28 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.712 02:56:28 -- target/referrals.sh@26 -- # sort 00:08:33.970 02:56:29 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:33.970 02:56:29 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:33.970 02:56:29 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:33.970 02:56:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.970 02:56:29 -- common/autotest_common.sh@10 -- # set +x 00:08:33.970 02:56:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.970 02:56:29 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:33.970 02:56:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.970 02:56:29 -- common/autotest_common.sh@10 -- # set +x 00:08:33.970 02:56:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.970 02:56:29 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:33.970 02:56:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.970 02:56:29 -- common/autotest_common.sh@10 -- # set +x 00:08:33.970 02:56:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.970 02:56:29 -- 
target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.970 02:56:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.970 02:56:29 -- target/referrals.sh@56 -- # jq length 00:08:33.970 02:56:29 -- common/autotest_common.sh@10 -- # set +x 00:08:33.970 02:56:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.970 02:56:29 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:33.970 02:56:29 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:33.970 02:56:29 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.970 02:56:29 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.970 02:56:29 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.970 02:56:29 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.970 02:56:29 -- target/referrals.sh@26 -- # sort 00:08:34.227 02:56:29 -- target/referrals.sh@26 -- # echo 00:08:34.227 02:56:29 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:34.227 02:56:29 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:34.227 02:56:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.227 02:56:29 -- common/autotest_common.sh@10 -- # set +x 00:08:34.227 02:56:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.227 02:56:29 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:34.227 02:56:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.227 02:56:29 -- common/autotest_common.sh@10 -- # set +x 00:08:34.227 02:56:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.227 02:56:29 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:34.227 02:56:29 -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:34.227 02:56:29 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:34.227 02:56:29 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:34.227 02:56:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.227 02:56:29 -- target/referrals.sh@21 -- # sort 00:08:34.227 02:56:29 -- common/autotest_common.sh@10 -- # set +x 00:08:34.227 02:56:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.227 02:56:29 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:34.227 02:56:29 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:34.227 02:56:29 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:34.227 02:56:29 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:34.227 02:56:29 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:34.227 02:56:29 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.227 02:56:29 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:34.227 02:56:29 -- target/referrals.sh@26 -- # sort 00:08:34.227 02:56:29 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:34.227 02:56:29 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:34.227 02:56:29 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:34.227 02:56:29 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:34.227 02:56:29 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:34.227 02:56:29 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 
00:08:34.227 02:56:29 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:34.483 02:56:29 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:34.483 02:56:29 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:34.483 02:56:29 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:34.483 02:56:29 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:34.483 02:56:29 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.483 02:56:29 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:34.483 02:56:29 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:34.483 02:56:29 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:34.483 02:56:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.483 02:56:29 -- common/autotest_common.sh@10 -- # set +x 00:08:34.483 02:56:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.483 02:56:29 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:34.483 02:56:29 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:34.483 02:56:29 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:34.483 02:56:29 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:34.483 02:56:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.483 02:56:29 -- common/autotest_common.sh@10 -- # set +x 00:08:34.483 02:56:29 -- target/referrals.sh@21 -- # sort 00:08:34.483 02:56:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
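The `get_referral_ips` calls above pipe `nvme discover -o json` through the same jq filter each time: drop the discovery subsystem being queried and keep only the referral target addresses. A minimal standalone sketch of that filter, run against a made-up sample of the JSON shape nvme-cli emits (the addresses and field values here are invented for illustration):

```shell
# Hypothetical sample of `nvme discover -o json` output; field names mirror
# nvme-cli's records array, but every value below is made up.
sample='{
  "records": [
    { "trtype": "tcp", "subtype": "current discovery subsystem",  "traddr": "10.0.0.2"  },
    { "trtype": "tcp", "subtype": "discovery subsystem referral", "traddr": "127.0.0.3" },
    { "trtype": "tcp", "subtype": "discovery subsystem referral", "traddr": "127.0.0.2" },
    { "trtype": "tcp", "subtype": "discovery subsystem referral", "traddr": "127.0.0.4" }
  ]
}'

# Same filter the test uses: exclude the subsystem we queried, emit only the
# referral traddr values, and sort for a stable comparison against the RPC view.
printf '%s' "$sample" |
  jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
  sort
```

Sorting on both sides (RPC output and discover output) is what lets the test compare the two lists with a plain `[[ a == b ]]` string match.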
00:08:34.483 02:56:29 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:34.483 02:56:29 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:34.483 02:56:29 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:34.483 02:56:29 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:34.483 02:56:29 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:34.483 02:56:29 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.483 02:56:29 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:34.483 02:56:29 -- target/referrals.sh@26 -- # sort 00:08:34.740 02:56:29 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:34.740 02:56:29 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:34.740 02:56:29 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:34.740 02:56:29 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:34.740 02:56:29 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:34.740 02:56:29 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.740 02:56:29 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:34.740 02:56:29 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:34.740 02:56:29 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:34.740 02:56:29 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:34.740 02:56:29 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:34.740 02:56:29 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.740 02:56:29 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:34.998 02:56:30 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:34.998 02:56:30 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:34.998 02:56:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.998 02:56:30 -- common/autotest_common.sh@10 -- # set +x 00:08:34.998 02:56:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.998 02:56:30 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:34.998 02:56:30 -- target/referrals.sh@82 -- # jq length 00:08:34.998 02:56:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.998 02:56:30 -- common/autotest_common.sh@10 -- # set +x 00:08:34.998 02:56:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.998 02:56:30 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:34.998 02:56:30 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:34.998 02:56:30 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:34.998 02:56:30 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:34.998 02:56:30 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.998 02:56:30 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:34.998 02:56:30 -- target/referrals.sh@26 -- # sort 00:08:35.256 02:56:30 -- target/referrals.sh@26 -- # echo 00:08:35.256 02:56:30 -- 
target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:35.256 02:56:30 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:35.256 02:56:30 -- target/referrals.sh@86 -- # nvmftestfini 00:08:35.256 02:56:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:35.256 02:56:30 -- nvmf/common.sh@116 -- # sync 00:08:35.256 02:56:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:35.256 02:56:30 -- nvmf/common.sh@119 -- # set +e 00:08:35.256 02:56:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:35.256 02:56:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:35.256 rmmod nvme_tcp 00:08:35.256 rmmod nvme_fabrics 00:08:35.256 rmmod nvme_keyring 00:08:35.256 02:56:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:35.256 02:56:30 -- nvmf/common.sh@123 -- # set -e 00:08:35.256 02:56:30 -- nvmf/common.sh@124 -- # return 0 00:08:35.256 02:56:30 -- nvmf/common.sh@477 -- # '[' -n 1909966 ']' 00:08:35.256 02:56:30 -- nvmf/common.sh@478 -- # killprocess 1909966 00:08:35.256 02:56:30 -- common/autotest_common.sh@926 -- # '[' -z 1909966 ']' 00:08:35.256 02:56:30 -- common/autotest_common.sh@930 -- # kill -0 1909966 00:08:35.256 02:56:30 -- common/autotest_common.sh@931 -- # uname 00:08:35.256 02:56:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:35.256 02:56:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1909966 00:08:35.256 02:56:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:35.256 02:56:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:35.256 02:56:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1909966' 00:08:35.256 killing process with pid 1909966 00:08:35.256 02:56:30 -- common/autotest_common.sh@945 -- # kill 1909966 00:08:35.256 02:56:30 -- common/autotest_common.sh@950 -- # wait 1909966 00:08:35.515 02:56:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:35.515 02:56:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:35.515 
02:56:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:35.515 02:56:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.515 02:56:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:35.515 02:56:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.515 02:56:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.515 02:56:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:37.419 02:56:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:37.419 00:08:37.419 real 0m6.992s 00:08:37.419 user 0m11.729s 00:08:37.419 sys 0m2.119s 00:08:37.419 02:56:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.419 02:56:32 -- common/autotest_common.sh@10 -- # set +x 00:08:37.419 ************************************ 00:08:37.419 END TEST nvmf_referrals 00:08:37.419 ************************************ 00:08:37.419 02:56:32 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:37.419 02:56:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:37.419 02:56:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:37.419 02:56:32 -- common/autotest_common.sh@10 -- # set +x 00:08:37.677 ************************************ 00:08:37.677 START TEST nvmf_connect_disconnect 00:08:37.677 ************************************ 00:08:37.677 02:56:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:37.677 * Looking for test storage... 
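The `run_test nvmf_connect_disconnect ...` invocation above is what produces the starred `START TEST` / `END TEST` banners and the `real/user/sys` timing seen throughout this log. A simplified sketch of such a wrapper (the real one lives in SPDK's `common/autotest_common.sh`; the banner text is copied from the log, while the function name and the timing/exit handling here are simplified assumptions):

```shell
# Sketch of a run_test-style wrapper: banner, timed execution, banner.
# run_test_sketch is a hypothetical name, not the real SPDK helper.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"          # bash keyword: prints real/user/sys to stderr
  local rc=$?        # capture the wrapped command's exit status
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test_sketch demo_check echo hello
```

The wrapper's exit status is that of the wrapped command, so a failing test body still fails the suite even though the banners print unconditionally.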
00:08:37.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:37.677 02:56:32 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:37.677 02:56:32 -- nvmf/common.sh@7 -- # uname -s 00:08:37.677 02:56:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:37.677 02:56:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:37.677 02:56:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:37.677 02:56:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:37.677 02:56:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:37.677 02:56:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:37.677 02:56:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:37.677 02:56:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:37.677 02:56:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:37.677 02:56:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:37.677 02:56:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.677 02:56:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:37.677 02:56:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:37.677 02:56:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:37.677 02:56:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:37.677 02:56:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:37.677 02:56:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:37.677 02:56:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:37.677 02:56:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:37.677 02:56:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.677 02:56:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.678 02:56:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.678 02:56:32 -- paths/export.sh@5 -- # export PATH 00:08:37.678 02:56:32 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:37.678 02:56:32 -- nvmf/common.sh@46 -- # : 0 00:08:37.678 02:56:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:37.678 02:56:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:37.678 02:56:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:37.678 02:56:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:37.678 02:56:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:37.678 02:56:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:37.678 02:56:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:37.678 02:56:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:37.678 02:56:32 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:37.678 02:56:32 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:37.678 02:56:32 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:37.678 02:56:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:37.678 02:56:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:37.678 02:56:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:37.678 02:56:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:37.678 02:56:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:37.678 02:56:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:37.678 02:56:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:37.678 02:56:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:37.678 02:56:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:37.678 02:56:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:37.678 02:56:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:37.678 02:56:32 -- common/autotest_common.sh@10 -- # set +x 00:08:39.593 02:56:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:39.593 02:56:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:39.593 02:56:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:39.593 02:56:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:39.593 02:56:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:39.593 02:56:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:39.593 02:56:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:39.593 02:56:34 -- nvmf/common.sh@294 -- # net_devs=() 00:08:39.593 02:56:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:39.593 02:56:34 -- nvmf/common.sh@295 -- # e810=() 00:08:39.593 02:56:34 -- nvmf/common.sh@295 -- # local -ga e810 00:08:39.593 02:56:34 -- nvmf/common.sh@296 -- # x722=() 00:08:39.593 02:56:34 -- nvmf/common.sh@296 -- # local -ga x722 00:08:39.593 02:56:34 -- nvmf/common.sh@297 -- # mlx=() 00:08:39.593 02:56:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:39.593 02:56:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:39.593 02:56:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:39.593 02:56:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:39.593 02:56:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:39.593 02:56:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:39.593 02:56:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:39.593 02:56:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:39.593 02:56:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
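The `e810+=(${pci_bus_cache[...]})` lines above classify NICs by appending whatever BDF addresses the cache holds for each known `vendor:device` ID; IDs with no attached devices expand to nothing and add no elements. A self-contained sketch of that pattern with a fake cache (the BDF values and the cache contents below are invented; the two Intel IDs are the E810 ones the script actually matches):

```shell
# Fake pci_bus_cache: keys are "vendor:device" IDs, values are space-separated
# BDF lists, mirroring the structure gather_supported_nvmf_pci_devs relies on.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:0a:00.0 0000:0a:00.1"   # two E810 ports present
  ["0x8086:0x1592"]=""                             # ID known, no device
  ["0x15b3:0x1017"]="0000:3b:00.0"                 # one ConnectX-5 present
)
intel=0x8086 mellanox=0x15b3
e810=() mlx=()
# Intentionally unquoted, as in the log: word splitting turns each BDF into
# its own array element, and an empty value contributes zero elements.
e810+=(${pci_bus_cache["$intel:0x1592"]})
e810+=(${pci_bus_cache["$intel:0x159b"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
echo "e810 devices: ${e810[*]} (count ${#e810[@]})"
echo "mlx devices:  ${mlx[*]} (count ${#mlx[@]})"
```

This is why the log's later `(( 2 == 0 ))` check works: the array length directly counts discovered devices across all matched IDs.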
00:08:39.593 02:56:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:39.593 02:56:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:39.593 02:56:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:39.593 02:56:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:39.593 02:56:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:39.593 02:56:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:39.593 02:56:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:39.593 02:56:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:39.593 02:56:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:39.593 02:56:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:39.593 02:56:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:39.593 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:39.593 02:56:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:39.593 02:56:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:39.593 02:56:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.593 02:56:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.593 02:56:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:39.593 02:56:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:39.593 02:56:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:39.593 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:39.593 02:56:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:39.593 02:56:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:39.593 02:56:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:39.593 02:56:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:39.593 02:56:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:39.593 02:56:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:39.593 02:56:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:39.593 
02:56:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:39.593 02:56:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:39.593 02:56:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.593 02:56:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:39.593 02:56:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.593 02:56:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:39.593 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:39.593 02:56:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.593 02:56:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:39.593 02:56:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:39.593 02:56:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:39.593 02:56:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:39.593 02:56:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:39.593 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:39.593 02:56:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:39.593 02:56:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:39.593 02:56:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:39.593 02:56:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:39.593 02:56:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:39.593 02:56:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:39.593 02:56:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:39.593 02:56:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:39.593 02:56:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:39.593 02:56:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:39.593 02:56:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:39.593 02:56:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:39.593 02:56:34 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:39.593 02:56:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:39.593 02:56:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:39.593 02:56:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:39.593 02:56:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:39.593 02:56:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:39.593 02:56:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:39.593 02:56:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:39.593 02:56:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:39.593 02:56:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:39.593 02:56:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:39.852 02:56:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:39.852 02:56:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:39.852 02:56:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:39.852 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:39.852 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:08:39.852 00:08:39.852 --- 10.0.0.2 ping statistics --- 00:08:39.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.852 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:08:39.852 02:56:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:39.852 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:39.852 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:08:39.852 00:08:39.852 --- 10.0.0.1 ping statistics --- 00:08:39.852 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:39.852 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:08:39.852 02:56:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:39.852 02:56:34 -- nvmf/common.sh@410 -- # return 0 00:08:39.852 02:56:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:39.852 02:56:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:39.852 02:56:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:39.852 02:56:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:39.852 02:56:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:39.852 02:56:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:39.852 02:56:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:39.852 02:56:34 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:39.852 02:56:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:39.852 02:56:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:39.852 02:56:34 -- common/autotest_common.sh@10 -- # set +x 00:08:39.852 02:56:34 -- nvmf/common.sh@469 -- # nvmfpid=1912345 00:08:39.852 02:56:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:39.852 02:56:34 -- nvmf/common.sh@470 -- # waitforlisten 1912345 00:08:39.852 02:56:34 -- common/autotest_common.sh@819 -- # '[' -z 1912345 ']' 00:08:39.852 02:56:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:39.852 02:56:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:39.852 02:56:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:39.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:39.852 02:56:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:39.852 02:56:34 -- common/autotest_common.sh@10 -- # set +x 00:08:39.852 [2024-07-14 02:56:34.973075] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:39.852 [2024-07-14 02:56:34.973165] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:39.852 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.852 [2024-07-14 02:56:35.046638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:40.111 [2024-07-14 02:56:35.134471] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:40.111 [2024-07-14 02:56:35.134655] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.111 [2024-07-14 02:56:35.134676] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.111 [2024-07-14 02:56:35.134689] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
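`nvmf_tgt` is launched above with `-m 0xF`, and the "Reactor started on core N" notices show one reactor per set bit of that mask (cores 0 through 3). A sketch of how such a core mask expands into a core list, in pure bash with no SPDK dependency (`mask_to_cores` is a hypothetical helper name, not an SPDK function):

```shell
# Expand an SPDK-style core mask into the list of core IDs it selects:
# bit i set in the mask means a reactor runs on core i.
mask_to_cores() {
  local mask=$(( $1 )) core=0 cores=()
  while (( mask )); do
    if (( mask & 1 )); then
      cores+=("$core")
    fi
    core=$(( core + 1 ))
    mask=$(( mask >> 1 ))
  done
  echo "${cores[@]}"
}

mask_to_cores 0xF   # -> 0 1 2 3, matching the four reactor notices
```

A sparse mask works the same way: `mask_to_cores 0x5` selects cores 0 and 2 only.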
00:08:40.111 [2024-07-14 02:56:35.134744] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:40.111 [2024-07-14 02:56:35.134808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:40.111 [2024-07-14 02:56:35.134858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:40.111 [2024-07-14 02:56:35.134860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.676 02:56:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:40.676 02:56:35 -- common/autotest_common.sh@852 -- # return 0 00:08:40.676 02:56:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:40.676 02:56:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:40.676 02:56:35 -- common/autotest_common.sh@10 -- # set +x 00:08:40.934 02:56:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.934 02:56:35 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:40.934 02:56:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.934 02:56:35 -- common/autotest_common.sh@10 -- # set +x 00:08:40.934 [2024-07-14 02:56:35.947494] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:40.934 02:56:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.934 02:56:35 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:40.934 02:56:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.934 02:56:35 -- common/autotest_common.sh@10 -- # set +x 00:08:40.934 02:56:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.934 02:56:35 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:40.934 02:56:35 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:40.934 02:56:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.934 02:56:35 -- 
common/autotest_common.sh@10 -- # set +x 00:08:40.934 02:56:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.934 02:56:35 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:40.934 02:56:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.934 02:56:35 -- common/autotest_common.sh@10 -- # set +x 00:08:40.934 02:56:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.934 02:56:35 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:40.934 02:56:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:40.934 02:56:35 -- common/autotest_common.sh@10 -- # set +x 00:08:40.934 [2024-07-14 02:56:36.000530] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:40.934 02:56:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:40.934 02:56:36 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:40.934 02:56:36 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:40.934 02:56:36 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:40.934 02:56:36 -- target/connect_disconnect.sh@34 -- # set +x 00:08:43.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.435 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.331 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:09:06.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.249 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.780 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.568 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.017 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.559 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.533 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:39.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.373 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.906 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.879 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.887 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.794 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.408 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.790 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.233 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.730 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.155 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.123 03:00:26 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:32.123 03:00:26 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:32.123 03:00:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:32.123 03:00:26 -- nvmf/common.sh@116 -- # sync 00:12:32.123 03:00:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:32.123 03:00:26 -- nvmf/common.sh@119 -- # set +e 00:12:32.123 03:00:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:32.123 03:00:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:32.123 rmmod nvme_tcp 00:12:32.123 rmmod nvme_fabrics 00:12:32.123 rmmod nvme_keyring 00:12:32.123 03:00:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:32.123 03:00:26 -- nvmf/common.sh@123 -- # set -e 00:12:32.123 03:00:26 -- nvmf/common.sh@124 -- # return 0 00:12:32.123 03:00:26 -- nvmf/common.sh@477 -- # '[' -n 1912345 ']' 00:12:32.123 03:00:26 -- nvmf/common.sh@478 -- # killprocess 1912345 00:12:32.123 03:00:26 -- common/autotest_common.sh@926 -- # '[' -z 1912345 ']' 00:12:32.123 03:00:26 -- common/autotest_common.sh@930 -- # kill -0 1912345 00:12:32.123 03:00:26 -- common/autotest_common.sh@931 -- # uname 00:12:32.123 03:00:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:32.123 03:00:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 
1912345 00:12:32.123 03:00:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:32.123 03:00:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:32.123 03:00:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1912345' 00:12:32.123 killing process with pid 1912345 00:12:32.123 03:00:26 -- common/autotest_common.sh@945 -- # kill 1912345 00:12:32.123 03:00:26 -- common/autotest_common.sh@950 -- # wait 1912345 00:12:32.123 03:00:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:32.123 03:00:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:32.123 03:00:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:32.123 03:00:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:32.123 03:00:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:32.123 03:00:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.123 03:00:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.123 03:00:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.081 03:00:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:34.081 00:12:34.081 real 3m56.605s 00:12:34.081 user 15m1.232s 00:12:34.081 sys 0m34.743s 00:12:34.081 03:00:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:34.081 03:00:29 -- common/autotest_common.sh@10 -- # set +x 00:12:34.081 ************************************ 00:12:34.081 END TEST nvmf_connect_disconnect 00:12:34.081 ************************************ 00:12:34.081 03:00:29 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:34.081 03:00:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:34.081 03:00:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:34.081 03:00:29 -- common/autotest_common.sh@10 -- # set +x 00:12:34.081 ************************************ 00:12:34.081 
START TEST nvmf_multitarget 00:12:34.081 ************************************ 00:12:34.081 03:00:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:34.346 * Looking for test storage... 00:12:34.346 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:34.346 03:00:29 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:34.346 03:00:29 -- nvmf/common.sh@7 -- # uname -s 00:12:34.346 03:00:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.346 03:00:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.346 03:00:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.346 03:00:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.346 03:00:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.346 03:00:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.346 03:00:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.346 03:00:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.346 03:00:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.346 03:00:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.346 03:00:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.346 03:00:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:34.346 03:00:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.346 03:00:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.346 03:00:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:34.346 03:00:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:34.346 03:00:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.346 03:00:29 -- 
scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.346 03:00:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.346 03:00:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.346 03:00:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.346 03:00:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.346 03:00:29 -- paths/export.sh@5 -- # export PATH 00:12:34.346 03:00:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.346 03:00:29 -- nvmf/common.sh@46 -- # : 0 00:12:34.346 03:00:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:34.346 03:00:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:34.346 03:00:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:34.346 03:00:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.346 03:00:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.346 03:00:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:34.346 03:00:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:34.346 03:00:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:34.346 03:00:29 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:34.346 03:00:29 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:34.346 03:00:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:34.346 03:00:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:34.346 03:00:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:34.346 03:00:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:34.346 03:00:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:34.346 03:00:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.346 03:00:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.346 03:00:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.346 03:00:29 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:34.346 03:00:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:34.346 03:00:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:34.346 03:00:29 -- common/autotest_common.sh@10 -- # set +x 00:12:36.256 03:00:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:36.256 03:00:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:36.256 03:00:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:36.257 03:00:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:36.257 03:00:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:36.257 03:00:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:36.257 03:00:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:36.257 03:00:31 -- nvmf/common.sh@294 -- # net_devs=() 00:12:36.257 03:00:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:36.257 03:00:31 -- nvmf/common.sh@295 -- # e810=() 00:12:36.257 03:00:31 -- nvmf/common.sh@295 -- # local -ga e810 00:12:36.257 03:00:31 -- nvmf/common.sh@296 -- # x722=() 00:12:36.257 03:00:31 -- nvmf/common.sh@296 -- # local -ga x722 00:12:36.257 03:00:31 -- nvmf/common.sh@297 -- # mlx=() 00:12:36.257 03:00:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:36.257 03:00:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.257 03:00:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.257 03:00:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.257 03:00:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.257 03:00:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.257 03:00:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.257 03:00:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.257 03:00:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.257 03:00:31 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.257 03:00:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.257 03:00:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.257 03:00:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:36.257 03:00:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:36.257 03:00:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:36.257 03:00:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:36.257 03:00:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:36.257 03:00:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:36.257 03:00:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:36.257 03:00:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:36.257 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:36.257 03:00:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:36.257 03:00:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:36.257 03:00:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.257 03:00:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.257 03:00:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:36.257 03:00:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:36.257 03:00:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:36.257 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:36.257 03:00:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:36.257 03:00:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:36.257 03:00:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.257 03:00:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.257 03:00:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:36.257 03:00:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:36.257 03:00:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:36.257 03:00:31 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:36.257 03:00:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:36.257 03:00:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.257 03:00:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:36.257 03:00:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.257 03:00:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:36.257 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:36.257 03:00:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.257 03:00:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:36.257 03:00:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.257 03:00:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:36.257 03:00:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.257 03:00:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:36.257 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:36.257 03:00:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.257 03:00:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:36.257 03:00:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:36.257 03:00:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:36.257 03:00:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:36.257 03:00:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:36.257 03:00:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.257 03:00:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.257 03:00:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.257 03:00:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:36.257 03:00:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.257 03:00:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.257 03:00:31 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:36.257 03:00:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.257 03:00:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.257 03:00:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:36.257 03:00:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:36.257 03:00:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.257 03:00:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.257 03:00:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.257 03:00:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.257 03:00:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:36.257 03:00:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.257 03:00:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.257 03:00:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.257 03:00:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:36.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:12:36.257 00:12:36.257 --- 10.0.0.2 ping statistics --- 00:12:36.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.257 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:12:36.257 03:00:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:36.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:12:36.257 00:12:36.257 --- 10.0.0.1 ping statistics --- 00:12:36.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.257 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:12:36.257 03:00:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.257 03:00:31 -- nvmf/common.sh@410 -- # return 0 00:12:36.257 03:00:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:36.257 03:00:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.257 03:00:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:36.257 03:00:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:36.257 03:00:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.257 03:00:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:36.257 03:00:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:36.257 03:00:31 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:36.257 03:00:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:36.257 03:00:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:36.257 03:00:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.257 03:00:31 -- nvmf/common.sh@469 -- # nvmfpid=1945058 00:12:36.257 03:00:31 -- nvmf/common.sh@470 -- # waitforlisten 1945058 00:12:36.257 03:00:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:36.257 03:00:31 -- common/autotest_common.sh@819 -- # '[' -z 1945058 ']' 00:12:36.257 03:00:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.257 03:00:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:36.257 03:00:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:36.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.257 03:00:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:36.257 03:00:31 -- common/autotest_common.sh@10 -- # set +x 00:12:36.257 [2024-07-14 03:00:31.489560] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:36.257 [2024-07-14 03:00:31.489645] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.515 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.515 [2024-07-14 03:00:31.559980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.515 [2024-07-14 03:00:31.653118] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:36.515 [2024-07-14 03:00:31.653297] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.515 [2024-07-14 03:00:31.653327] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:36.515 [2024-07-14 03:00:31.653350] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
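The multitarget test that follows verifies target counts by piping `multitarget_rpc.py nvmf_get_targets` through `jq length` and comparing against the expected number (1 before creation, 3 after adding `nvmf_tgt_1` and `nvmf_tgt_2`). A runnable sketch of that check, with a hard-coded JSON list standing in for the RPC output and `python3` standing in for `jq`:

```shell
#!/usr/bin/env bash
# Stand-in for the RPC response: the two created targets plus the default
# one. The list contents are illustrative, not captured from a real run.
targets='["nvmf_tgt_1", "nvmf_tgt_2", "default_tgt"]'

# Count entries, mirroring the script's `| jq length`.
count=$(echo "$targets" | python3 -c 'import json,sys; print(len(json.load(sys.stdin)))')

# Mirror the test's "[ actual != expected ]" failure check.
if [ "$count" != 3 ]; then
  echo "unexpected target count: $count" >&2
  exit 1
fi
echo "target count OK: $count"
```

The same shape covers both assertions in the log: `'[' 1 '!=' 1 ']'` before target creation and `'[' 3 '!=' 3 ']'` after.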
00:12:36.515 [2024-07-14 03:00:31.653427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.515 [2024-07-14 03:00:31.656888] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.515 [2024-07-14 03:00:31.656925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.515 [2024-07-14 03:00:31.656930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.455 03:00:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:37.455 03:00:32 -- common/autotest_common.sh@852 -- # return 0 00:12:37.455 03:00:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:37.455 03:00:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:37.455 03:00:32 -- common/autotest_common.sh@10 -- # set +x 00:12:37.455 03:00:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:37.455 03:00:32 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:37.455 03:00:32 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:37.455 03:00:32 -- target/multitarget.sh@21 -- # jq length 00:12:37.455 03:00:32 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:37.455 03:00:32 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:37.455 "nvmf_tgt_1" 00:12:37.455 03:00:32 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:37.713 "nvmf_tgt_2" 00:12:37.713 03:00:32 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:37.713 03:00:32 -- target/multitarget.sh@28 -- # jq length 00:12:37.713 
03:00:32 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:37.713 03:00:32 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:37.971 true 00:12:37.971 03:00:33 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:37.971 true 00:12:37.971 03:00:33 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:37.971 03:00:33 -- target/multitarget.sh@35 -- # jq length 00:12:37.971 03:00:33 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:37.971 03:00:33 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:37.971 03:00:33 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:37.971 03:00:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:37.971 03:00:33 -- nvmf/common.sh@116 -- # sync 00:12:37.971 03:00:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:37.971 03:00:33 -- nvmf/common.sh@119 -- # set +e 00:12:37.971 03:00:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:37.971 03:00:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:38.228 rmmod nvme_tcp 00:12:38.228 rmmod nvme_fabrics 00:12:38.228 rmmod nvme_keyring 00:12:38.228 03:00:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:38.228 03:00:33 -- nvmf/common.sh@123 -- # set -e 00:12:38.228 03:00:33 -- nvmf/common.sh@124 -- # return 0 00:12:38.228 03:00:33 -- nvmf/common.sh@477 -- # '[' -n 1945058 ']' 00:12:38.228 03:00:33 -- nvmf/common.sh@478 -- # killprocess 1945058 00:12:38.228 03:00:33 -- common/autotest_common.sh@926 -- # '[' -z 1945058 ']' 00:12:38.228 03:00:33 -- common/autotest_common.sh@930 -- # kill -0 1945058 00:12:38.228 03:00:33 -- common/autotest_common.sh@931 -- # uname 00:12:38.228 03:00:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:12:38.228 03:00:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1945058 00:12:38.228 03:00:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:38.228 03:00:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:38.228 03:00:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1945058' 00:12:38.228 killing process with pid 1945058 00:12:38.228 03:00:33 -- common/autotest_common.sh@945 -- # kill 1945058 00:12:38.228 03:00:33 -- common/autotest_common.sh@950 -- # wait 1945058 00:12:38.537 03:00:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:38.537 03:00:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:38.537 03:00:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:38.537 03:00:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:38.537 03:00:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:38.537 03:00:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.537 03:00:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.537 03:00:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.442 03:00:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:40.442 00:12:40.442 real 0m6.264s 00:12:40.443 user 0m9.049s 00:12:40.443 sys 0m1.873s 00:12:40.443 03:00:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:40.443 03:00:35 -- common/autotest_common.sh@10 -- # set +x 00:12:40.443 ************************************ 00:12:40.443 END TEST nvmf_multitarget 00:12:40.443 ************************************ 00:12:40.443 03:00:35 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:40.443 03:00:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:40.443 03:00:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:40.443 03:00:35 -- common/autotest_common.sh@10 -- # set +x 
00:12:40.443 ************************************ 00:12:40.443 START TEST nvmf_rpc 00:12:40.443 ************************************ 00:12:40.443 03:00:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:40.443 * Looking for test storage... 00:12:40.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:40.443 03:00:35 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:40.443 03:00:35 -- nvmf/common.sh@7 -- # uname -s 00:12:40.443 03:00:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:40.443 03:00:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:40.443 03:00:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:40.443 03:00:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:40.443 03:00:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:40.443 03:00:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:40.443 03:00:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:40.443 03:00:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:40.443 03:00:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:40.443 03:00:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:40.443 03:00:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:40.443 03:00:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:40.443 03:00:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:40.443 03:00:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:40.443 03:00:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:40.443 03:00:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:40.443 03:00:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 
00:12:40.443 03:00:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:40.443 03:00:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:40.443 03:00:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.443 03:00:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.443 03:00:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.443 03:00:35 -- paths/export.sh@5 -- # export PATH 00:12:40.443 03:00:35 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:40.443 03:00:35 -- nvmf/common.sh@46 -- # : 0 00:12:40.443 03:00:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:40.443 03:00:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:40.443 03:00:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:40.443 03:00:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:40.443 03:00:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:40.443 03:00:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:40.443 03:00:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:40.443 03:00:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:40.443 03:00:35 -- target/rpc.sh@11 -- # loops=5 00:12:40.443 03:00:35 -- target/rpc.sh@23 -- # nvmftestinit 00:12:40.443 03:00:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:40.443 03:00:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:40.443 03:00:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:40.443 03:00:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:40.443 03:00:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:40.443 03:00:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.443 03:00:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:40.443 03:00:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.443 03:00:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:40.443 03:00:35 -- 
nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:40.443 03:00:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:40.443 03:00:35 -- common/autotest_common.sh@10 -- # set +x 00:12:42.350 03:00:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:42.350 03:00:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:42.350 03:00:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:42.350 03:00:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:42.350 03:00:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:42.350 03:00:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:42.350 03:00:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:42.350 03:00:37 -- nvmf/common.sh@294 -- # net_devs=() 00:12:42.350 03:00:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:42.350 03:00:37 -- nvmf/common.sh@295 -- # e810=() 00:12:42.350 03:00:37 -- nvmf/common.sh@295 -- # local -ga e810 00:12:42.350 03:00:37 -- nvmf/common.sh@296 -- # x722=() 00:12:42.350 03:00:37 -- nvmf/common.sh@296 -- # local -ga x722 00:12:42.350 03:00:37 -- nvmf/common.sh@297 -- # mlx=() 00:12:42.350 03:00:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:42.350 03:00:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:42.350 03:00:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:42.350 03:00:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:42.350 03:00:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:42.350 03:00:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:42.350 03:00:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:42.350 03:00:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:42.350 03:00:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:42.350 03:00:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:12:42.350 03:00:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:42.350 03:00:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:42.350 03:00:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:42.350 03:00:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:42.350 03:00:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:42.350 03:00:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:42.350 03:00:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:42.350 03:00:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:42.350 03:00:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:42.350 03:00:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:42.350 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:42.350 03:00:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:42.350 03:00:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:42.350 03:00:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.350 03:00:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.350 03:00:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:42.350 03:00:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:42.350 03:00:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:42.350 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:42.350 03:00:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:42.350 03:00:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:42.350 03:00:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:42.350 03:00:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:42.350 03:00:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:42.350 03:00:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:42.350 03:00:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:42.350 03:00:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:42.350 03:00:37 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:42.350 03:00:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.350 03:00:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:42.350 03:00:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.350 03:00:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:42.350 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:42.350 03:00:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.350 03:00:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:42.350 03:00:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:42.350 03:00:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:42.350 03:00:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:42.350 03:00:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:42.350 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:42.350 03:00:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:42.350 03:00:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:42.350 03:00:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:42.350 03:00:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:42.350 03:00:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:42.350 03:00:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:42.350 03:00:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:42.350 03:00:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:42.350 03:00:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:42.350 03:00:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:42.350 03:00:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:42.350 03:00:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:42.350 03:00:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:42.350 03:00:37 -- 
nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:42.350 03:00:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:42.350 03:00:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:42.350 03:00:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:42.350 03:00:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:42.350 03:00:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:42.350 03:00:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:42.350 03:00:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:42.611 03:00:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:42.611 03:00:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:42.611 03:00:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:42.611 03:00:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:42.611 03:00:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:42.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:42.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:12:42.611 00:12:42.611 --- 10.0.0.2 ping statistics --- 00:12:42.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.611 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:12:42.611 03:00:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:42.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:42.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:12:42.611 00:12:42.611 --- 10.0.0.1 ping statistics --- 00:12:42.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:42.611 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:12:42.611 03:00:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:42.611 03:00:37 -- nvmf/common.sh@410 -- # return 0 00:12:42.611 03:00:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:42.611 03:00:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:42.611 03:00:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:42.611 03:00:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:42.611 03:00:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:42.611 03:00:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:42.611 03:00:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:42.611 03:00:37 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:42.611 03:00:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:42.611 03:00:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:42.611 03:00:37 -- common/autotest_common.sh@10 -- # set +x 00:12:42.611 03:00:37 -- nvmf/common.sh@469 -- # nvmfpid=1947177 00:12:42.611 03:00:37 -- nvmf/common.sh@470 -- # waitforlisten 1947177 00:12:42.611 03:00:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:42.611 03:00:37 -- common/autotest_common.sh@819 -- # '[' -z 1947177 ']' 00:12:42.611 03:00:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.611 03:00:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:42.611 03:00:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:42.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.611 03:00:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:42.611 03:00:37 -- common/autotest_common.sh@10 -- # set +x 00:12:42.611 [2024-07-14 03:00:37.742034] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:42.611 [2024-07-14 03:00:37.742123] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.611 EAL: No free 2048 kB hugepages reported on node 1 00:12:42.611 [2024-07-14 03:00:37.820641] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.870 [2024-07-14 03:00:37.918010] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:42.870 [2024-07-14 03:00:37.918180] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.870 [2024-07-14 03:00:37.918208] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.870 [2024-07-14 03:00:37.918232] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:42.870 [2024-07-14 03:00:37.920894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.870 [2024-07-14 03:00:37.920924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.870 [2024-07-14 03:00:37.920977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.870 [2024-07-14 03:00:37.920980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.806 03:00:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:43.806 03:00:38 -- common/autotest_common.sh@852 -- # return 0 00:12:43.806 03:00:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:43.806 03:00:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:43.806 03:00:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.806 03:00:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:43.806 03:00:38 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:43.806 03:00:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.806 03:00:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.806 03:00:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.806 03:00:38 -- target/rpc.sh@26 -- # stats='{ 00:12:43.806 "tick_rate": 2700000000, 00:12:43.806 "poll_groups": [ 00:12:43.806 { 00:12:43.806 "name": "nvmf_tgt_poll_group_0", 00:12:43.806 "admin_qpairs": 0, 00:12:43.806 "io_qpairs": 0, 00:12:43.806 "current_admin_qpairs": 0, 00:12:43.806 "current_io_qpairs": 0, 00:12:43.806 "pending_bdev_io": 0, 00:12:43.806 "completed_nvme_io": 0, 00:12:43.806 "transports": [] 00:12:43.806 }, 00:12:43.806 { 00:12:43.806 "name": "nvmf_tgt_poll_group_1", 00:12:43.806 "admin_qpairs": 0, 00:12:43.806 "io_qpairs": 0, 00:12:43.806 "current_admin_qpairs": 0, 00:12:43.806 "current_io_qpairs": 0, 00:12:43.806 "pending_bdev_io": 0, 00:12:43.806 "completed_nvme_io": 0, 00:12:43.806 "transports": [] 00:12:43.806 }, 00:12:43.807 { 00:12:43.807 "name": 
"nvmf_tgt_poll_group_2", 00:12:43.807 "admin_qpairs": 0, 00:12:43.807 "io_qpairs": 0, 00:12:43.807 "current_admin_qpairs": 0, 00:12:43.807 "current_io_qpairs": 0, 00:12:43.807 "pending_bdev_io": 0, 00:12:43.807 "completed_nvme_io": 0, 00:12:43.807 "transports": [] 00:12:43.807 }, 00:12:43.807 { 00:12:43.807 "name": "nvmf_tgt_poll_group_3", 00:12:43.807 "admin_qpairs": 0, 00:12:43.807 "io_qpairs": 0, 00:12:43.807 "current_admin_qpairs": 0, 00:12:43.807 "current_io_qpairs": 0, 00:12:43.807 "pending_bdev_io": 0, 00:12:43.807 "completed_nvme_io": 0, 00:12:43.807 "transports": [] 00:12:43.807 } 00:12:43.807 ] 00:12:43.807 }' 00:12:43.807 03:00:38 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:43.807 03:00:38 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:43.807 03:00:38 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:43.807 03:00:38 -- target/rpc.sh@15 -- # wc -l 00:12:43.807 03:00:38 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:43.807 03:00:38 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:43.807 03:00:38 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:43.807 03:00:38 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:43.807 03:00:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.807 03:00:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.807 [2024-07-14 03:00:38.830753] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:43.807 03:00:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.807 03:00:38 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:43.807 03:00:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.807 03:00:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.807 03:00:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.807 03:00:38 -- target/rpc.sh@33 -- # stats='{ 00:12:43.807 "tick_rate": 2700000000, 00:12:43.807 "poll_groups": [ 00:12:43.807 { 00:12:43.807 "name": 
"nvmf_tgt_poll_group_0", 00:12:43.807 "admin_qpairs": 0, 00:12:43.807 "io_qpairs": 0, 00:12:43.807 "current_admin_qpairs": 0, 00:12:43.807 "current_io_qpairs": 0, 00:12:43.807 "pending_bdev_io": 0, 00:12:43.807 "completed_nvme_io": 0, 00:12:43.807 "transports": [ 00:12:43.807 { 00:12:43.807 "trtype": "TCP" 00:12:43.807 } 00:12:43.807 ] 00:12:43.807 }, 00:12:43.807 { 00:12:43.807 "name": "nvmf_tgt_poll_group_1", 00:12:43.807 "admin_qpairs": 0, 00:12:43.807 "io_qpairs": 0, 00:12:43.807 "current_admin_qpairs": 0, 00:12:43.807 "current_io_qpairs": 0, 00:12:43.807 "pending_bdev_io": 0, 00:12:43.807 "completed_nvme_io": 0, 00:12:43.807 "transports": [ 00:12:43.807 { 00:12:43.807 "trtype": "TCP" 00:12:43.807 } 00:12:43.807 ] 00:12:43.807 }, 00:12:43.807 { 00:12:43.807 "name": "nvmf_tgt_poll_group_2", 00:12:43.807 "admin_qpairs": 0, 00:12:43.807 "io_qpairs": 0, 00:12:43.807 "current_admin_qpairs": 0, 00:12:43.807 "current_io_qpairs": 0, 00:12:43.807 "pending_bdev_io": 0, 00:12:43.807 "completed_nvme_io": 0, 00:12:43.807 "transports": [ 00:12:43.807 { 00:12:43.807 "trtype": "TCP" 00:12:43.807 } 00:12:43.807 ] 00:12:43.807 }, 00:12:43.807 { 00:12:43.807 "name": "nvmf_tgt_poll_group_3", 00:12:43.807 "admin_qpairs": 0, 00:12:43.807 "io_qpairs": 0, 00:12:43.807 "current_admin_qpairs": 0, 00:12:43.807 "current_io_qpairs": 0, 00:12:43.807 "pending_bdev_io": 0, 00:12:43.807 "completed_nvme_io": 0, 00:12:43.807 "transports": [ 00:12:43.807 { 00:12:43.807 "trtype": "TCP" 00:12:43.807 } 00:12:43.807 ] 00:12:43.807 } 00:12:43.807 ] 00:12:43.807 }' 00:12:43.807 03:00:38 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:43.807 03:00:38 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:43.807 03:00:38 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:43.807 03:00:38 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:43.807 03:00:38 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:43.807 03:00:38 -- target/rpc.sh@36 -- # jsum 
'.poll_groups[].io_qpairs' 00:12:43.807 03:00:38 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:43.807 03:00:38 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:43.807 03:00:38 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:43.807 03:00:38 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:43.807 03:00:38 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:43.807 03:00:38 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:43.807 03:00:38 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:43.807 03:00:38 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:43.807 03:00:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.807 03:00:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.807 Malloc1 00:12:43.807 03:00:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.807 03:00:38 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:43.807 03:00:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.807 03:00:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.807 03:00:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.807 03:00:38 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:43.807 03:00:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.807 03:00:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.807 03:00:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.807 03:00:38 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:43.807 03:00:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.807 03:00:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.807 03:00:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.807 03:00:38 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:43.807 03:00:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:43.807 03:00:38 -- common/autotest_common.sh@10 -- # set +x 00:12:43.807 [2024-07-14 03:00:38.988101] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:43.807 03:00:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:43.807 03:00:38 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:43.807 03:00:38 -- common/autotest_common.sh@640 -- # local es=0 00:12:43.807 03:00:38 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:43.807 03:00:38 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:43.807 03:00:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:43.807 03:00:38 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:43.807 03:00:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:43.807 03:00:38 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:43.807 03:00:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:43.807 03:00:38 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:43.807 03:00:38 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:43.807 03:00:38 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420
00:12:43.807 [2024-07-14 03:00:39.010704] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55'
00:12:43.807 Failed to write to /dev/nvme-fabrics: Input/output error
00:12:43.807 could not add new controller: failed to write to nvme-fabrics device
00:12:43.807 03:00:39 -- common/autotest_common.sh@643 -- # es=1
00:12:43.807 03:00:39 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:12:43.807 03:00:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:12:43.807 03:00:39 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:12:43.807 03:00:39 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:12:43.807 03:00:39 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:43.807 03:00:39 -- common/autotest_common.sh@10 -- # set +x
00:12:43.807 03:00:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:43.807 03:00:39 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:44.742 03:00:39 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME
00:12:44.742 03:00:39 -- common/autotest_common.sh@1177 -- # local i=0
00:12:44.743 03:00:39 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0
00:12:44.743 03:00:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]]
00:12:44.743 03:00:39 -- common/autotest_common.sh@1184 -- # sleep 2
00:12:46.651 03:00:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 ))
00:12:46.651 03:00:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL
00:12:46.651 03:00:41 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME
00:12:46.651 03:00:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1
00:12:46.651 03:00:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter ))
00:12:46.651 03:00:41 -- common/autotest_common.sh@1187 -- # return 0
00:12:46.651 03:00:41 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:46.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:46.651 03:00:41 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:46.651 03:00:41 -- common/autotest_common.sh@1198 -- # local i=0
00:12:46.651 03:00:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:12:46.651 03:00:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:46.651 03:00:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:12:46.651 03:00:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:46.651 03:00:41 -- common/autotest_common.sh@1210 -- # return 0
00:12:46.651 03:00:41 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:12:46.651 03:00:41 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:46.651 03:00:41 -- common/autotest_common.sh@10 -- # set +x
00:12:46.651 03:00:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:46.651 03:00:41 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:46.651 03:00:41 -- common/autotest_common.sh@640 -- # local es=0
00:12:46.651 03:00:41 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:46.651 03:00:41 -- common/autotest_common.sh@628 -- # local arg=nvme
00:12:46.651 03:00:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:12:46.651 03:00:41 -- common/autotest_common.sh@632 -- # type -t nvme
00:12:46.651 03:00:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:12:46.651 03:00:41 -- common/autotest_common.sh@634 -- # type -P nvme
00:12:46.651 03:00:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:12:46.651 03:00:41 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme
00:12:46.651 03:00:41 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]]
00:12:46.651 03:00:41 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:46.651 [2024-07-14 03:00:41.744414] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55'
00:12:46.651 Failed to write to /dev/nvme-fabrics: Input/output error
00:12:46.651 could not add new controller: failed to write to nvme-fabrics device
00:12:46.651 03:00:41 -- common/autotest_common.sh@643 -- # es=1
00:12:46.651 03:00:41 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:12:46.651 03:00:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:12:46.651 03:00:41 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:12:46.651 03:00:41 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:12:46.651 03:00:41 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:46.651 03:00:41 -- common/autotest_common.sh@10 -- # set +x
00:12:46.651 03:00:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:46.651 03:00:41 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:47.220 03:00:42 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:12:47.220 03:00:42 -- common/autotest_common.sh@1177 -- # local i=0
00:12:47.220 03:00:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0
00:12:47.220 03:00:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]]
00:12:47.220 03:00:42 -- common/autotest_common.sh@1184 -- # sleep 2
00:12:49.756 03:00:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 ))
00:12:49.756 03:00:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL
00:12:49.756 03:00:44 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME
00:12:49.756 03:00:44 -- common/autotest_common.sh@1186 -- # nvme_devices=1
00:12:49.756 03:00:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter ))
00:12:49.756 03:00:44 -- common/autotest_common.sh@1187 -- # return 0
00:12:49.756 03:00:44 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:49.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:49.756 03:00:44 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:49.756 03:00:44 -- common/autotest_common.sh@1198 -- # local i=0
00:12:49.756 03:00:44 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:12:49.756 03:00:44 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:49.756 03:00:44 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:12:49.756 03:00:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:49.756 03:00:44 -- common/autotest_common.sh@1210 -- # return 0
00:12:49.756 03:00:44 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:49.756 03:00:44 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:49.756 03:00:44 -- common/autotest_common.sh@10 -- # set +x
00:12:49.756 03:00:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:49.756 03:00:44 -- target/rpc.sh@81 -- # seq 1 5
00:12:49.756 03:00:44 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:49.756 03:00:44 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:49.756 03:00:44 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:49.756 03:00:44 -- common/autotest_common.sh@10 -- # set +x
00:12:49.756 03:00:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:49.756 03:00:44 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:49.756 03:00:44 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:49.756 03:00:44 -- common/autotest_common.sh@10 -- # set +x
00:12:49.756 [2024-07-14 03:00:44.549083] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:49.756 03:00:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:49.756 03:00:44 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:49.756 03:00:44 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:49.756 03:00:44 -- common/autotest_common.sh@10 -- # set +x
00:12:49.756 03:00:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:49.756 03:00:44 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:49.756 03:00:44 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:49.756 03:00:44 -- common/autotest_common.sh@10 -- # set +x
00:12:49.756 03:00:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:49.756 03:00:44 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:50.015 03:00:45 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:50.015 03:00:45 -- common/autotest_common.sh@1177 -- # local i=0
00:12:50.015 03:00:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0
00:12:50.015 03:00:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]]
00:12:50.015 03:00:45 -- common/autotest_common.sh@1184 -- # sleep 2
00:12:52.586 03:00:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 ))
00:12:52.586 03:00:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL
00:12:52.586 03:00:47 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME
00:12:52.586 03:00:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1
00:12:52.586 03:00:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter ))
00:12:52.586 03:00:47 -- common/autotest_common.sh@1187 -- # return 0
00:12:52.586 03:00:47 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:52.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:52.586 03:00:47 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:52.586 03:00:47 -- common/autotest_common.sh@1198 -- # local i=0
00:12:52.586 03:00:47 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:12:52.586 03:00:47 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:52.586 03:00:47 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:12:52.586 03:00:47 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:52.586 03:00:47 -- common/autotest_common.sh@1210 -- # return 0
00:12:52.586 03:00:47 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:52.586 03:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:52.586 03:00:47 -- common/autotest_common.sh@10 -- # set +x
00:12:52.586 03:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:52.586 03:00:47 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:52.586 03:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:52.586 03:00:47 -- common/autotest_common.sh@10 -- # set +x
00:12:52.586 03:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:52.586 03:00:47 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:52.586 03:00:47 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:52.586 03:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:52.586 03:00:47 -- common/autotest_common.sh@10 -- # set +x
00:12:52.586 03:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:52.586 03:00:47 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:52.586 03:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:52.586 03:00:47 -- common/autotest_common.sh@10 -- # set +x
00:12:52.586 [2024-07-14 03:00:47.396261] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:52.586 03:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:52.586 03:00:47 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:52.586 03:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:52.586 03:00:47 -- common/autotest_common.sh@10 -- # set +x
00:12:52.586 03:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:52.586 03:00:47 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:52.586 03:00:47 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:52.586 03:00:47 -- common/autotest_common.sh@10 -- # set +x
00:12:52.586 03:00:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:52.586 03:00:47 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:52.845 03:00:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:52.845 03:00:48 -- common/autotest_common.sh@1177 -- # local i=0
00:12:52.845 03:00:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0
00:12:52.845 03:00:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]]
00:12:52.845 03:00:48 -- common/autotest_common.sh@1184 -- # sleep 2
00:12:55.380 03:00:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 ))
00:12:55.380 03:00:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL
00:12:55.380 03:00:50 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME
00:12:55.380 03:00:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1
00:12:55.380 03:00:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter ))
00:12:55.380 03:00:50 -- common/autotest_common.sh@1187 -- # return 0
00:12:55.380 03:00:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:55.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:55.380 03:00:50 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:55.380 03:00:50 -- common/autotest_common.sh@1198 -- # local i=0
00:12:55.380 03:00:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:12:55.380 03:00:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:55.380 03:00:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:12:55.380 03:00:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:55.380 03:00:50 -- common/autotest_common.sh@1210 -- # return 0
00:12:55.380 03:00:50 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:55.380 03:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:55.380 03:00:50 -- common/autotest_common.sh@10 -- # set +x
00:12:55.380 03:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:55.380 03:00:50 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:55.380 03:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:55.380 03:00:50 -- common/autotest_common.sh@10 -- # set +x
00:12:55.380 03:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:55.380 03:00:50 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:55.380 03:00:50 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:55.380 03:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:55.380 03:00:50 -- common/autotest_common.sh@10 -- # set +x
00:12:55.380 03:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:55.380 03:00:50 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:55.380 03:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:55.380 03:00:50 -- common/autotest_common.sh@10 -- # set +x
00:12:55.380 [2024-07-14 03:00:50.202080] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:55.380 03:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:55.380 03:00:50 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:55.380 03:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:55.380 03:00:50 -- common/autotest_common.sh@10 -- # set +x
00:12:55.380 03:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:55.380 03:00:50 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:55.380 03:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:55.380 03:00:50 -- common/autotest_common.sh@10 -- # set +x
00:12:55.380 03:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:55.380 03:00:50 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:55.638 03:00:50 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:55.638 03:00:50 -- common/autotest_common.sh@1177 -- # local i=0
00:12:55.638 03:00:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0
00:12:55.638 03:00:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]]
00:12:55.638 03:00:50 -- common/autotest_common.sh@1184 -- # sleep 2
00:12:58.171 03:00:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 ))
00:12:58.171 03:00:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL
00:12:58.171 03:00:52 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME
00:12:58.171 03:00:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1
00:12:58.171 03:00:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter ))
00:12:58.171 03:00:52 -- common/autotest_common.sh@1187 -- # return 0
00:12:58.171 03:00:52 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:58.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:58.171 03:00:52 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:58.171 03:00:52 -- common/autotest_common.sh@1198 -- # local i=0
00:12:58.171 03:00:52 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:12:58.172 03:00:52 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:58.172 03:00:52 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:12:58.172 03:00:52 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:58.172 03:00:52 -- common/autotest_common.sh@1210 -- # return 0
00:12:58.172 03:00:52 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:58.172 03:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:58.172 03:00:52 -- common/autotest_common.sh@10 -- # set +x
00:12:58.172 03:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:58.172 03:00:52 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:58.172 03:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:58.172 03:00:52 -- common/autotest_common.sh@10 -- # set +x
00:12:58.172 03:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:58.172 03:00:52 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:58.172 03:00:52 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:58.172 03:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:58.172 03:00:52 -- common/autotest_common.sh@10 -- # set +x
00:12:58.172 03:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:58.172 03:00:52 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:58.172 03:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:58.172 03:00:52 -- common/autotest_common.sh@10 -- # set +x
00:12:58.172 [2024-07-14 03:00:52.980217] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:58.172 03:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:58.172 03:00:52 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:58.172 03:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:58.172 03:00:52 -- common/autotest_common.sh@10 -- # set +x
00:12:58.172 03:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:58.172 03:00:52 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:58.172 03:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:12:58.172 03:00:52 -- common/autotest_common.sh@10 -- # set +x
00:12:58.172 03:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:12:58.172 03:00:52 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:58.430 03:00:53 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:58.430 03:00:53 -- common/autotest_common.sh@1177 -- # local i=0
00:12:58.430 03:00:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0
00:12:58.430 03:00:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]]
00:12:58.430 03:00:53 -- common/autotest_common.sh@1184 -- # sleep 2
00:13:00.337 03:00:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 ))
00:13:00.597 03:00:55 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL
00:13:00.597 03:00:55 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME
00:13:00.597 03:00:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1
00:13:00.597 03:00:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter ))
00:13:00.597 03:00:55 -- common/autotest_common.sh@1187 -- # return 0
00:13:00.597 03:00:55 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:00.597 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:00.597 03:00:55 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:00.597 03:00:55 -- common/autotest_common.sh@1198 -- # local i=0
00:13:00.597 03:00:55 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:13:00.597 03:00:55 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:00.597 03:00:55 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:13:00.597 03:00:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:00.597 03:00:55 -- common/autotest_common.sh@1210 -- # return 0
00:13:00.597 03:00:55 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:13:00.597 03:00:55 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:00.597 03:00:55 -- common/autotest_common.sh@10 -- # set +x
00:13:00.597 03:00:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:00.597 03:00:55 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:00.597 03:00:55 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:00.597 03:00:55 -- common/autotest_common.sh@10 -- # set +x
00:13:00.597 03:00:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:00.597 03:00:55 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:13:00.597 03:00:55 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:13:00.597 03:00:55 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:00.597 03:00:55 -- common/autotest_common.sh@10 -- # set +x
00:13:00.597 03:00:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:00.597 03:00:55 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:00.597 03:00:55 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:00.597 03:00:55 -- common/autotest_common.sh@10 -- # set +x
00:13:00.597 [2024-07-14 03:00:55.759796] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:00.597 03:00:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:00.597 03:00:55 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:13:00.597 03:00:55 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:00.597 03:00:55 -- common/autotest_common.sh@10 -- # set +x
00:13:00.597 03:00:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:00.597 03:00:55 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:13:00.597 03:00:55 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:00.597 03:00:55 -- common/autotest_common.sh@10 -- # set +x
00:13:00.597 03:00:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:00.597 03:00:55 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:13:01.537 03:00:56 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:13:01.537 03:00:56 -- common/autotest_common.sh@1177 -- # local i=0
00:13:01.537 03:00:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0
00:13:01.537 03:00:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]]
00:13:01.537 03:00:56 -- common/autotest_common.sh@1184 -- # sleep 2
00:13:03.443 03:00:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 ))
00:13:03.443 03:00:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL
00:13:03.443 03:00:58 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME
00:13:03.443 03:00:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1
00:13:03.443 03:00:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter ))
00:13:03.443 03:00:58 -- common/autotest_common.sh@1187 -- # return 0
00:13:03.443 03:00:58 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:03.443 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:03.443 03:00:58 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:03.443 03:00:58 -- common/autotest_common.sh@1198 -- # local i=0
00:13:03.443 03:00:58 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL
00:13:03.443 03:00:58 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:03.443 03:00:58 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL
00:13:03.443 03:00:58 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:03.443 03:00:58 -- common/autotest_common.sh@1210 -- # return 0
00:13:03.443 03:00:58 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@99 -- # seq 1 5
00:13:03.443 03:00:58 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:13:03.443 03:00:58 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 [2024-07-14 03:00:58.568522] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:13:03.443 03:00:58 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 [2024-07-14 03:00:58.616596] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:13:03.443 03:00:58 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 [2024-07-14 03:00:58.664763] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.443 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.443 03:00:58 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:03.443 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.443 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.702 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.702 03:00:58 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:13:03.702 03:00:58 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:13:03.702 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.702 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.702 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.702 03:00:58 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:03.702 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.702 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.702 [2024-07-14 03:00:58.712946] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:03.702 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.702 03:00:58 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:13:03.702 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.702 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.702 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.702 03:00:58 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:13:03.702 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.702 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.702 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.702 03:00:58 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:03.702 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.702 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.702 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.702 03:00:58 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:03.702 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.702 03:00:58 -- common/autotest_common.sh@10 -- # set +x
00:13:03.702 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:03.702 03:00:58 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:13:03.702 03:00:58 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:13:03.702 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:03.702 03:00:58
-- common/autotest_common.sh@10 -- # set +x 00:13:03.702 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.702 03:00:58 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.703 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.703 03:00:58 -- common/autotest_common.sh@10 -- # set +x 00:13:03.703 [2024-07-14 03:00:58.761109] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.703 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.703 03:00:58 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:03.703 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.703 03:00:58 -- common/autotest_common.sh@10 -- # set +x 00:13:03.703 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.703 03:00:58 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.703 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.703 03:00:58 -- common/autotest_common.sh@10 -- # set +x 00:13:03.703 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.703 03:00:58 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.703 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.703 03:00:58 -- common/autotest_common.sh@10 -- # set +x 00:13:03.703 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.703 03:00:58 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.703 03:00:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.703 03:00:58 -- common/autotest_common.sh@10 -- # set +x 00:13:03.703 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.703 03:00:58 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:03.703 03:00:58 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:13:03.703 03:00:58 -- common/autotest_common.sh@10 -- # set +x 00:13:03.703 03:00:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:03.703 03:00:58 -- target/rpc.sh@110 -- # stats='{ 00:13:03.703 "tick_rate": 2700000000, 00:13:03.703 "poll_groups": [ 00:13:03.703 { 00:13:03.703 "name": "nvmf_tgt_poll_group_0", 00:13:03.703 "admin_qpairs": 2, 00:13:03.703 "io_qpairs": 84, 00:13:03.703 "current_admin_qpairs": 0, 00:13:03.703 "current_io_qpairs": 0, 00:13:03.703 "pending_bdev_io": 0, 00:13:03.703 "completed_nvme_io": 186, 00:13:03.703 "transports": [ 00:13:03.703 { 00:13:03.703 "trtype": "TCP" 00:13:03.703 } 00:13:03.703 ] 00:13:03.703 }, 00:13:03.703 { 00:13:03.703 "name": "nvmf_tgt_poll_group_1", 00:13:03.703 "admin_qpairs": 2, 00:13:03.703 "io_qpairs": 84, 00:13:03.703 "current_admin_qpairs": 0, 00:13:03.703 "current_io_qpairs": 0, 00:13:03.703 "pending_bdev_io": 0, 00:13:03.703 "completed_nvme_io": 184, 00:13:03.703 "transports": [ 00:13:03.703 { 00:13:03.703 "trtype": "TCP" 00:13:03.703 } 00:13:03.703 ] 00:13:03.703 }, 00:13:03.703 { 00:13:03.703 "name": "nvmf_tgt_poll_group_2", 00:13:03.703 "admin_qpairs": 1, 00:13:03.703 "io_qpairs": 84, 00:13:03.703 "current_admin_qpairs": 0, 00:13:03.703 "current_io_qpairs": 0, 00:13:03.703 "pending_bdev_io": 0, 00:13:03.703 "completed_nvme_io": 182, 00:13:03.703 "transports": [ 00:13:03.703 { 00:13:03.703 "trtype": "TCP" 00:13:03.703 } 00:13:03.703 ] 00:13:03.703 }, 00:13:03.703 { 00:13:03.703 "name": "nvmf_tgt_poll_group_3", 00:13:03.703 "admin_qpairs": 2, 00:13:03.703 "io_qpairs": 84, 00:13:03.703 "current_admin_qpairs": 0, 00:13:03.703 "current_io_qpairs": 0, 00:13:03.703 "pending_bdev_io": 0, 00:13:03.703 "completed_nvme_io": 134, 00:13:03.703 "transports": [ 00:13:03.703 { 00:13:03.703 "trtype": "TCP" 00:13:03.703 } 00:13:03.703 ] 00:13:03.703 } 00:13:03.703 ] 00:13:03.703 }' 00:13:03.703 03:00:58 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 
00:13:03.703 03:00:58 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:03.703 03:00:58 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:03.703 03:00:58 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:03.703 03:00:58 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:03.703 03:00:58 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:03.703 03:00:58 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:03.703 03:00:58 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:03.703 03:00:58 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:03.703 03:00:58 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:03.703 03:00:58 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:03.703 03:00:58 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:03.703 03:00:58 -- target/rpc.sh@123 -- # nvmftestfini 00:13:03.703 03:00:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:03.703 03:00:58 -- nvmf/common.sh@116 -- # sync 00:13:03.703 03:00:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:03.703 03:00:58 -- nvmf/common.sh@119 -- # set +e 00:13:03.703 03:00:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:03.703 03:00:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:03.703 rmmod nvme_tcp 00:13:03.703 rmmod nvme_fabrics 00:13:03.703 rmmod nvme_keyring 00:13:03.703 03:00:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:03.963 03:00:58 -- nvmf/common.sh@123 -- # set -e 00:13:03.963 03:00:58 -- nvmf/common.sh@124 -- # return 0 00:13:03.963 03:00:58 -- nvmf/common.sh@477 -- # '[' -n 1947177 ']' 00:13:03.963 03:00:58 -- nvmf/common.sh@478 -- # killprocess 1947177 00:13:03.963 03:00:58 -- common/autotest_common.sh@926 -- # '[' -z 1947177 ']' 00:13:03.963 03:00:58 -- common/autotest_common.sh@930 -- # kill -0 1947177 00:13:03.963 03:00:58 -- common/autotest_common.sh@931 -- # uname 00:13:03.963 03:00:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:03.963 
03:00:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1947177 00:13:03.963 03:00:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:03.963 03:00:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:03.963 03:00:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1947177' 00:13:03.963 killing process with pid 1947177 00:13:03.963 03:00:58 -- common/autotest_common.sh@945 -- # kill 1947177 00:13:03.963 03:00:58 -- common/autotest_common.sh@950 -- # wait 1947177 00:13:03.963 03:00:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:03.963 03:00:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:03.963 03:00:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:04.223 03:00:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:04.223 03:00:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:04.223 03:00:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.223 03:00:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.223 03:00:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.131 03:01:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:06.131 00:13:06.131 real 0m25.664s 00:13:06.131 user 1m24.415s 00:13:06.131 sys 0m4.136s 00:13:06.131 03:01:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.131 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:13:06.131 ************************************ 00:13:06.131 END TEST nvmf_rpc 00:13:06.131 ************************************ 00:13:06.131 03:01:01 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:06.131 03:01:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:06.131 03:01:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:06.131 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:13:06.131 
************************************ 00:13:06.131 START TEST nvmf_invalid 00:13:06.131 ************************************ 00:13:06.131 03:01:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:06.131 * Looking for test storage... 00:13:06.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:06.131 03:01:01 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:06.131 03:01:01 -- nvmf/common.sh@7 -- # uname -s 00:13:06.131 03:01:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:06.131 03:01:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:06.131 03:01:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:06.131 03:01:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:06.131 03:01:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:06.131 03:01:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:06.131 03:01:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:06.131 03:01:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:06.131 03:01:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:06.131 03:01:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:06.131 03:01:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:06.131 03:01:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:06.131 03:01:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:06.131 03:01:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:06.131 03:01:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:06.131 03:01:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:06.131 03:01:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 
00:13:06.131 03:01:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:06.131 03:01:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:06.132 03:01:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.132 03:01:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.132 03:01:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.132 03:01:01 -- paths/export.sh@5 -- # export PATH 00:13:06.132 03:01:01 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:06.132 03:01:01 -- nvmf/common.sh@46 -- # : 0 00:13:06.132 03:01:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:06.132 03:01:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:06.132 03:01:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:06.132 03:01:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:06.132 03:01:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:06.132 03:01:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:06.132 03:01:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:06.132 03:01:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:06.132 03:01:01 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:06.132 03:01:01 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:06.132 03:01:01 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:06.132 03:01:01 -- target/invalid.sh@14 -- # target=foobar 00:13:06.132 03:01:01 -- target/invalid.sh@16 -- # RANDOM=0 00:13:06.132 03:01:01 -- target/invalid.sh@34 -- # nvmftestinit 00:13:06.132 03:01:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:06.132 03:01:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:06.132 03:01:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:06.132 03:01:01 -- nvmf/common.sh@398 -- # local -g 
is_hw=no 00:13:06.132 03:01:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:06.132 03:01:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.132 03:01:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:06.132 03:01:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.132 03:01:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:06.132 03:01:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:06.132 03:01:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:06.132 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:13:08.666 03:01:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:08.666 03:01:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:08.666 03:01:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:08.666 03:01:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:08.666 03:01:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:08.666 03:01:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:08.666 03:01:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:08.666 03:01:03 -- nvmf/common.sh@294 -- # net_devs=() 00:13:08.666 03:01:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:08.666 03:01:03 -- nvmf/common.sh@295 -- # e810=() 00:13:08.666 03:01:03 -- nvmf/common.sh@295 -- # local -ga e810 00:13:08.666 03:01:03 -- nvmf/common.sh@296 -- # x722=() 00:13:08.666 03:01:03 -- nvmf/common.sh@296 -- # local -ga x722 00:13:08.666 03:01:03 -- nvmf/common.sh@297 -- # mlx=() 00:13:08.666 03:01:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:08.666 03:01:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.666 03:01:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.666 03:01:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.666 03:01:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.666 03:01:03 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.666 03:01:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.666 03:01:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.666 03:01:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.666 03:01:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.666 03:01:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.666 03:01:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.666 03:01:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:08.666 03:01:03 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:08.666 03:01:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:08.666 03:01:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:08.666 03:01:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:08.666 03:01:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:08.666 03:01:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:08.666 03:01:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:08.666 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:08.667 03:01:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:08.667 03:01:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:08.667 03:01:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.667 03:01:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.667 03:01:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:08.667 03:01:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:08.667 03:01:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:08.667 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:08.667 03:01:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:08.667 03:01:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:08.667 03:01:03 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.667 03:01:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.667 03:01:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:08.667 03:01:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:08.667 03:01:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:08.667 03:01:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:08.667 03:01:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:08.667 03:01:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.667 03:01:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:08.667 03:01:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.667 03:01:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:08.667 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:08.667 03:01:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.667 03:01:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:08.667 03:01:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.667 03:01:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:08.667 03:01:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.667 03:01:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:08.667 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:08.667 03:01:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.667 03:01:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:08.667 03:01:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:08.667 03:01:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:08.667 03:01:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:08.667 03:01:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:08.667 03:01:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.667 03:01:03 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.667 03:01:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.667 03:01:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:08.667 03:01:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.667 03:01:03 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:08.667 03:01:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:08.667 03:01:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.667 03:01:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.667 03:01:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:08.667 03:01:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:08.667 03:01:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.667 03:01:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.667 03:01:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.667 03:01:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.667 03:01:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:08.667 03:01:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.667 03:01:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.667 03:01:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.667 03:01:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:08.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:08.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:13:08.667 00:13:08.667 --- 10.0.0.2 ping statistics --- 00:13:08.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.667 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:13:08.667 03:01:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:08.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:13:08.667 00:13:08.667 --- 10.0.0.1 ping statistics --- 00:13:08.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.667 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:13:08.667 03:01:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.667 03:01:03 -- nvmf/common.sh@410 -- # return 0 00:13:08.667 03:01:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:08.667 03:01:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.667 03:01:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:08.667 03:01:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:08.667 03:01:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.667 03:01:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:08.667 03:01:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:08.667 03:01:03 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:08.667 03:01:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:08.667 03:01:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:08.667 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:13:08.667 03:01:03 -- nvmf/common.sh@469 -- # nvmfpid=1951891 00:13:08.667 03:01:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:08.667 03:01:03 -- nvmf/common.sh@470 -- # waitforlisten 1951891 00:13:08.667 03:01:03 -- common/autotest_common.sh@819 
-- # '[' -z 1951891 ']' 00:13:08.667 03:01:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.667 03:01:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:08.667 03:01:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.667 03:01:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:08.667 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:13:08.667 [2024-07-14 03:01:03.560743] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:08.667 [2024-07-14 03:01:03.560831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.667 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.667 [2024-07-14 03:01:03.629816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:08.667 [2024-07-14 03:01:03.717841] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:08.667 [2024-07-14 03:01:03.718012] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.667 [2024-07-14 03:01:03.718040] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.667 [2024-07-14 03:01:03.718060] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:08.667 [2024-07-14 03:01:03.718139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.667 [2024-07-14 03:01:03.718164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.667 [2024-07-14 03:01:03.718230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:08.667 [2024-07-14 03:01:03.718233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.602 03:01:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:09.602 03:01:04 -- common/autotest_common.sh@852 -- # return 0 00:13:09.602 03:01:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:09.602 03:01:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:09.602 03:01:04 -- common/autotest_common.sh@10 -- # set +x 00:13:09.602 03:01:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.602 03:01:04 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:09.602 03:01:04 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15262 00:13:09.602 [2024-07-14 03:01:04.776123] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:09.602 03:01:04 -- target/invalid.sh@40 -- # out='request: 00:13:09.602 { 00:13:09.602 "nqn": "nqn.2016-06.io.spdk:cnode15262", 00:13:09.602 "tgt_name": "foobar", 00:13:09.602 "method": "nvmf_create_subsystem", 00:13:09.602 "req_id": 1 00:13:09.602 } 00:13:09.602 Got JSON-RPC error response 00:13:09.602 response: 00:13:09.602 { 00:13:09.602 "code": -32603, 00:13:09.602 "message": "Unable to find target foobar" 00:13:09.602 }' 00:13:09.602 03:01:04 -- target/invalid.sh@41 -- # [[ request: 00:13:09.602 { 00:13:09.602 "nqn": "nqn.2016-06.io.spdk:cnode15262", 00:13:09.602 "tgt_name": "foobar", 00:13:09.602 "method": 
"nvmf_create_subsystem", 00:13:09.602 "req_id": 1 00:13:09.602 } 00:13:09.602 Got JSON-RPC error response 00:13:09.602 response: 00:13:09.602 { 00:13:09.602 "code": -32603, 00:13:09.602 "message": "Unable to find target foobar" 00:13:09.602 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:09.602 03:01:04 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:09.602 03:01:04 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3227 00:13:09.859 [2024-07-14 03:01:05.008913] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3227: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:09.859 03:01:05 -- target/invalid.sh@45 -- # out='request: 00:13:09.859 { 00:13:09.859 "nqn": "nqn.2016-06.io.spdk:cnode3227", 00:13:09.859 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:09.859 "method": "nvmf_create_subsystem", 00:13:09.859 "req_id": 1 00:13:09.859 } 00:13:09.859 Got JSON-RPC error response 00:13:09.859 response: 00:13:09.859 { 00:13:09.859 "code": -32602, 00:13:09.859 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:09.859 }' 00:13:09.859 03:01:05 -- target/invalid.sh@46 -- # [[ request: 00:13:09.859 { 00:13:09.859 "nqn": "nqn.2016-06.io.spdk:cnode3227", 00:13:09.859 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:09.859 "method": "nvmf_create_subsystem", 00:13:09.859 "req_id": 1 00:13:09.859 } 00:13:09.859 Got JSON-RPC error response 00:13:09.859 response: 00:13:09.859 { 00:13:09.859 "code": -32602, 00:13:09.859 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:09.859 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:09.859 03:01:05 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:09.859 03:01:05 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1856 00:13:10.117 [2024-07-14 
03:01:05.245641] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1856: invalid model number 'SPDK_Controller' 00:13:10.117 03:01:05 -- target/invalid.sh@50 -- # out='request: 00:13:10.117 { 00:13:10.117 "nqn": "nqn.2016-06.io.spdk:cnode1856", 00:13:10.117 "model_number": "SPDK_Controller\u001f", 00:13:10.117 "method": "nvmf_create_subsystem", 00:13:10.117 "req_id": 1 00:13:10.117 } 00:13:10.117 Got JSON-RPC error response 00:13:10.117 response: 00:13:10.117 { 00:13:10.117 "code": -32602, 00:13:10.117 "message": "Invalid MN SPDK_Controller\u001f" 00:13:10.117 }' 00:13:10.117 03:01:05 -- target/invalid.sh@51 -- # [[ request: 00:13:10.117 { 00:13:10.117 "nqn": "nqn.2016-06.io.spdk:cnode1856", 00:13:10.117 "model_number": "SPDK_Controller\u001f", 00:13:10.117 "method": "nvmf_create_subsystem", 00:13:10.117 "req_id": 1 00:13:10.117 } 00:13:10.117 Got JSON-RPC error response 00:13:10.117 response: 00:13:10.117 { 00:13:10.117 "code": -32602, 00:13:10.117 "message": "Invalid MN SPDK_Controller\u001f" 00:13:10.117 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:10.117 03:01:05 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:10.117 03:01:05 -- target/invalid.sh@19 -- # local length=21 ll 00:13:10.117 03:01:05 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:10.117 03:01:05 -- target/invalid.sh@21 -- # local chars 00:13:10.117 03:01:05 -- target/invalid.sh@22 -- # local string 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:10.117 
03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # printf %x 81 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # string+=Q 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # printf %x 112 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # string+=p 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # printf %x 124 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # string+='|' 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # printf %x 107 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # string+=k 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # printf %x 93 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # string+=']' 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # printf %x 52 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # string+=4 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 
00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # printf %x 34 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # string+='"' 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # printf %x 59 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # string+=';' 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # printf %x 38 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # string+='&' 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # printf %x 42 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # string+='*' 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # printf %x 119 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # string+=w 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # printf %x 89 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x59' 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # string+=Y 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # printf 
%x 35 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # string+='#' 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # printf %x 75 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # string+=K 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # printf %x 110 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # string+=n 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # printf %x 112 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # string+=p 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.117 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.117 03:01:05 -- target/invalid.sh@25 -- # printf %x 92 00:13:10.118 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:10.118 03:01:05 -- target/invalid.sh@25 -- # string+='\' 00:13:10.118 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.118 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.118 03:01:05 -- target/invalid.sh@25 -- # printf %x 124 00:13:10.118 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:10.118 03:01:05 -- target/invalid.sh@25 -- # string+='|' 00:13:10.118 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.118 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.118 03:01:05 -- target/invalid.sh@25 -- # printf %x 90 00:13:10.118 03:01:05 -- target/invalid.sh@25 -- # 
echo -e '\x5a' 00:13:10.118 03:01:05 -- target/invalid.sh@25 -- # string+=Z 00:13:10.118 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.118 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.118 03:01:05 -- target/invalid.sh@25 -- # printf %x 55 00:13:10.118 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x37' 00:13:10.118 03:01:05 -- target/invalid.sh@25 -- # string+=7 00:13:10.118 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.118 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.118 03:01:05 -- target/invalid.sh@25 -- # printf %x 54 00:13:10.118 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:10.118 03:01:05 -- target/invalid.sh@25 -- # string+=6 00:13:10.118 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.118 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.118 03:01:05 -- target/invalid.sh@28 -- # [[ Q == \- ]] 00:13:10.118 03:01:05 -- target/invalid.sh@31 -- # echo 'Qp|k]4";&*wY#Knp\|Z76' 00:13:10.118 03:01:05 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Qp|k]4";&*wY#Knp\|Z76' nqn.2016-06.io.spdk:cnode15354 00:13:10.376 [2024-07-14 03:01:05.578752] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15354: invalid serial number 'Qp|k]4";&*wY#Knp\|Z76' 00:13:10.376 03:01:05 -- target/invalid.sh@54 -- # out='request: 00:13:10.376 { 00:13:10.376 "nqn": "nqn.2016-06.io.spdk:cnode15354", 00:13:10.376 "serial_number": "Qp|k]4\";&*wY#Knp\\|Z76", 00:13:10.376 "method": "nvmf_create_subsystem", 00:13:10.376 "req_id": 1 00:13:10.376 } 00:13:10.376 Got JSON-RPC error response 00:13:10.376 response: 00:13:10.376 { 00:13:10.376 "code": -32602, 00:13:10.376 "message": "Invalid SN Qp|k]4\";&*wY#Knp\\|Z76" 00:13:10.376 }' 00:13:10.376 03:01:05 -- target/invalid.sh@55 -- # [[ request: 00:13:10.376 { 00:13:10.376 "nqn": "nqn.2016-06.io.spdk:cnode15354", 00:13:10.376 "serial_number": 
"Qp|k]4\";&*wY#Knp\\|Z76", 00:13:10.376 "method": "nvmf_create_subsystem", 00:13:10.376 "req_id": 1 00:13:10.376 } 00:13:10.376 Got JSON-RPC error response 00:13:10.376 response: 00:13:10.376 { 00:13:10.376 "code": -32602, 00:13:10.376 "message": "Invalid SN Qp|k]4\";&*wY#Knp\\|Z76" 00:13:10.376 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:10.376 03:01:05 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:10.376 03:01:05 -- target/invalid.sh@19 -- # local length=41 ll 00:13:10.376 03:01:05 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:10.376 03:01:05 -- target/invalid.sh@21 -- # local chars 00:13:10.376 03:01:05 -- target/invalid.sh@22 -- # local string 00:13:10.376 03:01:05 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:10.376 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.376 03:01:05 -- target/invalid.sh@25 -- # printf %x 37 00:13:10.376 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:10.376 03:01:05 -- target/invalid.sh@25 -- # string+=% 00:13:10.376 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.376 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.376 03:01:05 -- target/invalid.sh@25 -- # printf %x 40 00:13:10.376 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:10.376 03:01:05 -- target/invalid.sh@25 -- # string+='(' 00:13:10.376 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.376 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.376 03:01:05 -- target/invalid.sh@25 -- # printf %x 
54 00:13:10.376 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:10.376 03:01:05 -- target/invalid.sh@25 -- # string+=6 00:13:10.376 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.376 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.376 03:01:05 -- target/invalid.sh@25 -- # printf %x 86 00:13:10.376 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:10.376 03:01:05 -- target/invalid.sh@25 -- # string+=V 00:13:10.376 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.376 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.376 03:01:05 -- target/invalid.sh@25 -- # printf %x 74 00:13:10.376 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:10.377 03:01:05 -- target/invalid.sh@25 -- # string+=J 00:13:10.377 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.377 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.377 03:01:05 -- target/invalid.sh@25 -- # printf %x 47 00:13:10.377 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:10.377 03:01:05 -- target/invalid.sh@25 -- # string+=/ 00:13:10.377 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.377 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.377 03:01:05 -- target/invalid.sh@25 -- # printf %x 84 00:13:10.377 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:10.377 03:01:05 -- target/invalid.sh@25 -- # string+=T 00:13:10.377 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.377 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.377 03:01:05 -- target/invalid.sh@25 -- # printf %x 86 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=V 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 116 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e 
'\x74' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=t 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 112 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=p 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 73 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=I 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 111 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=o 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 51 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=3 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 49 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=1 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 86 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # 
string+=V 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 32 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=' ' 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 67 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=C 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 106 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=j 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 92 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+='\' 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 116 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=t 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 77 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=M 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # 
(( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 111 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=o 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 86 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=V 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 43 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=+ 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 114 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=r 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 101 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=e 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 47 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=/ 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( 
ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 86 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=V 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 54 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=6 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 123 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+='{' 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 47 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=/ 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 62 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+='>' 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 51 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=3 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 
-- # printf %x 100 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=d 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 120 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=x 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 73 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=I 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 92 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+='\' 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 117 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=u 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 88 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=X 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 115 00:13:10.635 03:01:05 -- 
target/invalid.sh@25 -- # echo -e '\x73' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=s 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # printf %x 71 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:10.635 03:01:05 -- target/invalid.sh@25 -- # string+=G 00:13:10.635 03:01:05 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.636 03:01:05 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.636 03:01:05 -- target/invalid.sh@28 -- # [[ % == \- ]] 00:13:10.636 03:01:05 -- target/invalid.sh@31 -- # echo '%(6VJ/TVtpIo31V Cj\tMoV+re/V6{/>3dxI\uXsG' 00:13:10.636 03:01:05 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '%(6VJ/TVtpIo31V Cj\tMoV+re/V6{/>3dxI\uXsG' nqn.2016-06.io.spdk:cnode32233 00:13:10.893 [2024-07-14 03:01:05.951995] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32233: invalid model number '%(6VJ/TVtpIo31V Cj\tMoV+re/V6{/>3dxI\uXsG' 00:13:10.893 03:01:05 -- target/invalid.sh@58 -- # out='request: 00:13:10.893 { 00:13:10.893 "nqn": "nqn.2016-06.io.spdk:cnode32233", 00:13:10.893 "model_number": "%(6VJ/TVtpIo31V Cj\\tMoV+re/V6{/>3dxI\\uXsG", 00:13:10.893 "method": "nvmf_create_subsystem", 00:13:10.893 "req_id": 1 00:13:10.893 } 00:13:10.893 Got JSON-RPC error response 00:13:10.893 response: 00:13:10.893 { 00:13:10.893 "code": -32602, 00:13:10.893 "message": "Invalid MN %(6VJ/TVtpIo31V Cj\\tMoV+re/V6{/>3dxI\\uXsG" 00:13:10.893 }' 00:13:10.893 03:01:05 -- target/invalid.sh@59 -- # [[ request: 00:13:10.893 { 00:13:10.893 "nqn": "nqn.2016-06.io.spdk:cnode32233", 00:13:10.893 "model_number": "%(6VJ/TVtpIo31V Cj\\tMoV+re/V6{/>3dxI\\uXsG", 00:13:10.893 "method": "nvmf_create_subsystem", 00:13:10.893 "req_id": 1 00:13:10.893 } 00:13:10.893 Got JSON-RPC error response 00:13:10.893 response: 
00:13:10.893 { 00:13:10.893 "code": -32602, 00:13:10.893 "message": "Invalid MN %(6VJ/TVtpIo31V Cj\\tMoV+re/V6{/>3dxI\\uXsG" 00:13:10.893 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:10.893 03:01:05 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:11.151 [2024-07-14 03:01:06.176812] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.151 03:01:06 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:11.440 03:01:06 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:11.440 03:01:06 -- target/invalid.sh@67 -- # echo '' 00:13:11.440 03:01:06 -- target/invalid.sh@67 -- # head -n 1 00:13:11.440 03:01:06 -- target/invalid.sh@67 -- # IP= 00:13:11.440 03:01:06 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:11.699 [2024-07-14 03:01:06.678541] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:11.699 03:01:06 -- target/invalid.sh@69 -- # out='request: 00:13:11.699 { 00:13:11.699 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:11.699 "listen_address": { 00:13:11.699 "trtype": "tcp", 00:13:11.699 "traddr": "", 00:13:11.699 "trsvcid": "4421" 00:13:11.699 }, 00:13:11.699 "method": "nvmf_subsystem_remove_listener", 00:13:11.699 "req_id": 1 00:13:11.699 } 00:13:11.699 Got JSON-RPC error response 00:13:11.699 response: 00:13:11.699 { 00:13:11.699 "code": -32602, 00:13:11.699 "message": "Invalid parameters" 00:13:11.699 }' 00:13:11.699 03:01:06 -- target/invalid.sh@70 -- # [[ request: 00:13:11.699 { 00:13:11.699 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:11.699 "listen_address": { 00:13:11.699 "trtype": "tcp", 00:13:11.699 "traddr": "", 00:13:11.699 "trsvcid": "4421" 00:13:11.699 }, 00:13:11.699 "method": 
"nvmf_subsystem_remove_listener", 00:13:11.699 "req_id": 1 00:13:11.699 } 00:13:11.699 Got JSON-RPC error response 00:13:11.699 response: 00:13:11.699 { 00:13:11.699 "code": -32602, 00:13:11.699 "message": "Invalid parameters" 00:13:11.699 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:11.699 03:01:06 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17372 -i 0 00:13:11.699 [2024-07-14 03:01:06.923364] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17372: invalid cntlid range [0-65519] 00:13:11.699 03:01:06 -- target/invalid.sh@73 -- # out='request: 00:13:11.699 { 00:13:11.699 "nqn": "nqn.2016-06.io.spdk:cnode17372", 00:13:11.699 "min_cntlid": 0, 00:13:11.699 "method": "nvmf_create_subsystem", 00:13:11.699 "req_id": 1 00:13:11.699 } 00:13:11.699 Got JSON-RPC error response 00:13:11.699 response: 00:13:11.699 { 00:13:11.699 "code": -32602, 00:13:11.699 "message": "Invalid cntlid range [0-65519]" 00:13:11.699 }' 00:13:11.699 03:01:06 -- target/invalid.sh@74 -- # [[ request: 00:13:11.699 { 00:13:11.699 "nqn": "nqn.2016-06.io.spdk:cnode17372", 00:13:11.699 "min_cntlid": 0, 00:13:11.699 "method": "nvmf_create_subsystem", 00:13:11.699 "req_id": 1 00:13:11.699 } 00:13:11.699 Got JSON-RPC error response 00:13:11.699 response: 00:13:11.699 { 00:13:11.699 "code": -32602, 00:13:11.699 "message": "Invalid cntlid range [0-65519]" 00:13:11.699 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:11.699 03:01:06 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7749 -i 65520 00:13:11.956 [2024-07-14 03:01:07.164123] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7749: invalid cntlid range [65520-65519] 00:13:11.956 03:01:07 -- target/invalid.sh@75 -- # out='request: 00:13:11.956 { 00:13:11.956 "nqn": 
"nqn.2016-06.io.spdk:cnode7749", 00:13:11.956 "min_cntlid": 65520, 00:13:11.956 "method": "nvmf_create_subsystem", 00:13:11.956 "req_id": 1 00:13:11.956 } 00:13:11.956 Got JSON-RPC error response 00:13:11.956 response: 00:13:11.956 { 00:13:11.956 "code": -32602, 00:13:11.956 "message": "Invalid cntlid range [65520-65519]" 00:13:11.956 }' 00:13:11.956 03:01:07 -- target/invalid.sh@76 -- # [[ request: 00:13:11.956 { 00:13:11.956 "nqn": "nqn.2016-06.io.spdk:cnode7749", 00:13:11.956 "min_cntlid": 65520, 00:13:11.956 "method": "nvmf_create_subsystem", 00:13:11.956 "req_id": 1 00:13:11.956 } 00:13:11.956 Got JSON-RPC error response 00:13:11.956 response: 00:13:11.957 { 00:13:11.957 "code": -32602, 00:13:11.957 "message": "Invalid cntlid range [65520-65519]" 00:13:11.957 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:11.957 03:01:07 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23295 -I 0 00:13:12.214 [2024-07-14 03:01:07.396929] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23295: invalid cntlid range [1-0] 00:13:12.214 03:01:07 -- target/invalid.sh@77 -- # out='request: 00:13:12.214 { 00:13:12.214 "nqn": "nqn.2016-06.io.spdk:cnode23295", 00:13:12.214 "max_cntlid": 0, 00:13:12.214 "method": "nvmf_create_subsystem", 00:13:12.214 "req_id": 1 00:13:12.214 } 00:13:12.214 Got JSON-RPC error response 00:13:12.214 response: 00:13:12.214 { 00:13:12.214 "code": -32602, 00:13:12.214 "message": "Invalid cntlid range [1-0]" 00:13:12.214 }' 00:13:12.214 03:01:07 -- target/invalid.sh@78 -- # [[ request: 00:13:12.214 { 00:13:12.214 "nqn": "nqn.2016-06.io.spdk:cnode23295", 00:13:12.214 "max_cntlid": 0, 00:13:12.214 "method": "nvmf_create_subsystem", 00:13:12.214 "req_id": 1 00:13:12.214 } 00:13:12.214 Got JSON-RPC error response 00:13:12.214 response: 00:13:12.214 { 00:13:12.214 "code": -32602, 00:13:12.214 "message": "Invalid cntlid range 
[1-0]" 00:13:12.214 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:12.214 03:01:07 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9003 -I 65520 00:13:12.472 [2024-07-14 03:01:07.633743] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9003: invalid cntlid range [1-65520] 00:13:12.472 03:01:07 -- target/invalid.sh@79 -- # out='request: 00:13:12.472 { 00:13:12.472 "nqn": "nqn.2016-06.io.spdk:cnode9003", 00:13:12.472 "max_cntlid": 65520, 00:13:12.472 "method": "nvmf_create_subsystem", 00:13:12.472 "req_id": 1 00:13:12.472 } 00:13:12.472 Got JSON-RPC error response 00:13:12.472 response: 00:13:12.472 { 00:13:12.472 "code": -32602, 00:13:12.472 "message": "Invalid cntlid range [1-65520]" 00:13:12.472 }' 00:13:12.472 03:01:07 -- target/invalid.sh@80 -- # [[ request: 00:13:12.472 { 00:13:12.472 "nqn": "nqn.2016-06.io.spdk:cnode9003", 00:13:12.472 "max_cntlid": 65520, 00:13:12.472 "method": "nvmf_create_subsystem", 00:13:12.472 "req_id": 1 00:13:12.472 } 00:13:12.472 Got JSON-RPC error response 00:13:12.472 response: 00:13:12.472 { 00:13:12.472 "code": -32602, 00:13:12.472 "message": "Invalid cntlid range [1-65520]" 00:13:12.472 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:12.472 03:01:07 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13614 -i 6 -I 5 00:13:12.730 [2024-07-14 03:01:07.878591] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13614: invalid cntlid range [6-5] 00:13:12.730 03:01:07 -- target/invalid.sh@83 -- # out='request: 00:13:12.730 { 00:13:12.730 "nqn": "nqn.2016-06.io.spdk:cnode13614", 00:13:12.730 "min_cntlid": 6, 00:13:12.730 "max_cntlid": 5, 00:13:12.730 "method": "nvmf_create_subsystem", 00:13:12.730 "req_id": 1 00:13:12.730 } 00:13:12.730 Got JSON-RPC error response 
00:13:12.730 response: 00:13:12.730 { 00:13:12.730 "code": -32602, 00:13:12.730 "message": "Invalid cntlid range [6-5]" 00:13:12.730 }' 00:13:12.730 03:01:07 -- target/invalid.sh@84 -- # [[ request: 00:13:12.730 { 00:13:12.730 "nqn": "nqn.2016-06.io.spdk:cnode13614", 00:13:12.730 "min_cntlid": 6, 00:13:12.730 "max_cntlid": 5, 00:13:12.730 "method": "nvmf_create_subsystem", 00:13:12.730 "req_id": 1 00:13:12.730 } 00:13:12.730 Got JSON-RPC error response 00:13:12.730 response: 00:13:12.730 { 00:13:12.730 "code": -32602, 00:13:12.730 "message": "Invalid cntlid range [6-5]" 00:13:12.730 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:12.730 03:01:07 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:12.989 03:01:08 -- target/invalid.sh@87 -- # out='request: 00:13:12.989 { 00:13:12.989 "name": "foobar", 00:13:12.989 "method": "nvmf_delete_target", 00:13:12.989 "req_id": 1 00:13:12.989 } 00:13:12.989 Got JSON-RPC error response 00:13:12.989 response: 00:13:12.989 { 00:13:12.989 "code": -32602, 00:13:12.989 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:12.989 }' 00:13:12.989 03:01:08 -- target/invalid.sh@88 -- # [[ request: 00:13:12.989 { 00:13:12.989 "name": "foobar", 00:13:12.989 "method": "nvmf_delete_target", 00:13:12.989 "req_id": 1 00:13:12.989 } 00:13:12.989 Got JSON-RPC error response 00:13:12.989 response: 00:13:12.989 { 00:13:12.989 "code": -32602, 00:13:12.989 "message": "The specified target doesn't exist, cannot delete it." 
00:13:12.989 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:12.989 03:01:08 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:12.989 03:01:08 -- target/invalid.sh@91 -- # nvmftestfini 00:13:12.989 03:01:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:12.989 03:01:08 -- nvmf/common.sh@116 -- # sync 00:13:12.989 03:01:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:12.989 03:01:08 -- nvmf/common.sh@119 -- # set +e 00:13:12.989 03:01:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:12.989 03:01:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:12.989 rmmod nvme_tcp 00:13:12.989 rmmod nvme_fabrics 00:13:12.989 rmmod nvme_keyring 00:13:12.989 03:01:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:12.989 03:01:08 -- nvmf/common.sh@123 -- # set -e 00:13:12.989 03:01:08 -- nvmf/common.sh@124 -- # return 0 00:13:12.989 03:01:08 -- nvmf/common.sh@477 -- # '[' -n 1951891 ']' 00:13:12.989 03:01:08 -- nvmf/common.sh@478 -- # killprocess 1951891 00:13:12.989 03:01:08 -- common/autotest_common.sh@926 -- # '[' -z 1951891 ']' 00:13:12.989 03:01:08 -- common/autotest_common.sh@930 -- # kill -0 1951891 00:13:12.989 03:01:08 -- common/autotest_common.sh@931 -- # uname 00:13:12.989 03:01:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:12.989 03:01:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1951891 00:13:12.989 03:01:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:12.989 03:01:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:12.989 03:01:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1951891' 00:13:12.989 killing process with pid 1951891 00:13:12.989 03:01:08 -- common/autotest_common.sh@945 -- # kill 1951891 00:13:12.989 03:01:08 -- common/autotest_common.sh@950 -- # wait 1951891 00:13:13.247 03:01:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 
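The cntlid checks above all fail with JSON-RPC error -32602. A minimal stand-alone sketch of the range rule the target appears to enforce (max_cntlid at or below 65519 and min_cntlid not above max_cntlid; the upper bound is inferred from the rejected [1-65520] request, so treat it as an assumption rather than the exact SPDK check):

```shell
#!/usr/bin/env bash
# Hypothetical re-creation of the cntlid range validation exercised above.
# The bound 65519 (0xFFEF) is inferred from the rejected [1-65520] request.
validate_cntlid_range() {
    local min=$1 max=$2
    if (( min < 1 || max > 65519 || min > max )); then
        echo "Invalid cntlid range [$min-$max]"
        return 1
    fi
    echo "valid cntlid range [$min-$max]"
}

validate_cntlid_range 1 65520 || true   # rejected, as in the cnode9003 case
validate_cntlid_range 6 5     || true   # rejected, as in the cnode13614 case
validate_cntlid_range 1 65519           # accepted
```

The test scripts then compare the captured error text with a character-escaped glob (`[[ $out == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]`), which is equivalent to matching against `*"Invalid cntlid range"*`.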
00:13:13.247 03:01:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:13.247 03:01:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:13.247 03:01:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:13.247 03:01:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:13.247 03:01:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.247 03:01:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.247 03:01:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.150 03:01:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:15.150 00:13:15.150 real 0m9.100s 00:13:15.150 user 0m22.130s 00:13:15.150 sys 0m2.413s 00:13:15.150 03:01:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.150 03:01:10 -- common/autotest_common.sh@10 -- # set +x 00:13:15.150 ************************************ 00:13:15.150 END TEST nvmf_invalid 00:13:15.150 ************************************ 00:13:15.408 03:01:10 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:15.408 03:01:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:15.408 03:01:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:15.408 03:01:10 -- common/autotest_common.sh@10 -- # set +x 00:13:15.408 ************************************ 00:13:15.408 START TEST nvmf_abort 00:13:15.408 ************************************ 00:13:15.408 03:01:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:15.408 * Looking for test storage... 
00:13:15.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.408 03:01:10 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.408 03:01:10 -- nvmf/common.sh@7 -- # uname -s 00:13:15.408 03:01:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.408 03:01:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.408 03:01:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.408 03:01:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.408 03:01:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.408 03:01:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.408 03:01:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.408 03:01:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.408 03:01:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.408 03:01:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.408 03:01:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:15.408 03:01:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:15.408 03:01:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.408 03:01:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.408 03:01:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.408 03:01:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.408 03:01:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.408 03:01:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.408 03:01:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.408 03:01:10 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.408 03:01:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.408 03:01:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.408 03:01:10 -- paths/export.sh@5 -- # export PATH 00:13:15.408 03:01:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.408 03:01:10 -- nvmf/common.sh@46 -- # : 0 00:13:15.408 03:01:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:15.408 03:01:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:15.408 03:01:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:15.408 03:01:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.408 03:01:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.408 03:01:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:15.408 03:01:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:15.408 03:01:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:15.408 03:01:10 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:15.408 03:01:10 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:15.408 03:01:10 -- target/abort.sh@14 -- # nvmftestinit 00:13:15.408 03:01:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:15.408 03:01:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.408 03:01:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:15.408 03:01:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:15.408 03:01:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:15.408 03:01:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.408 03:01:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:15.408 03:01:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.408 03:01:10 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:15.408 03:01:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:15.408 03:01:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:15.408 03:01:10 -- common/autotest_common.sh@10 -- # set +x 00:13:17.308 03:01:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:17.308 03:01:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:17.308 03:01:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:17.308 03:01:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:17.308 03:01:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:17.308 03:01:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:17.308 03:01:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:17.308 03:01:12 -- nvmf/common.sh@294 -- # net_devs=() 00:13:17.308 03:01:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:17.308 03:01:12 -- nvmf/common.sh@295 -- # e810=() 00:13:17.308 03:01:12 -- nvmf/common.sh@295 -- # local -ga e810 00:13:17.308 03:01:12 -- nvmf/common.sh@296 -- # x722=() 00:13:17.308 03:01:12 -- nvmf/common.sh@296 -- # local -ga x722 00:13:17.308 03:01:12 -- nvmf/common.sh@297 -- # mlx=() 00:13:17.308 03:01:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:17.308 03:01:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:17.308 03:01:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:17.308 03:01:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:17.308 03:01:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:17.308 03:01:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:17.308 03:01:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:17.308 03:01:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:17.308 03:01:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:17.308 03:01:12 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:17.308 03:01:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:17.308 03:01:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:17.308 03:01:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:17.308 03:01:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:17.308 03:01:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:17.308 03:01:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:17.308 03:01:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:17.308 03:01:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:17.308 03:01:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:17.308 03:01:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:17.308 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:17.308 03:01:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:17.308 03:01:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:17.308 03:01:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.308 03:01:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.308 03:01:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:17.308 03:01:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:17.308 03:01:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:17.308 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:17.308 03:01:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:17.308 03:01:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:17.308 03:01:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:17.308 03:01:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:17.308 03:01:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:17.308 03:01:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:17.308 03:01:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:17.308 03:01:12 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:17.308 03:01:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:17.308 03:01:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.308 03:01:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:17.308 03:01:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.308 03:01:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:17.308 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:17.308 03:01:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.308 03:01:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:17.308 03:01:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:17.308 03:01:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:17.308 03:01:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:17.308 03:01:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:17.308 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:17.308 03:01:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:17.308 03:01:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:17.308 03:01:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:17.308 03:01:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:17.308 03:01:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:17.308 03:01:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:17.308 03:01:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:17.308 03:01:12 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:17.308 03:01:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:17.308 03:01:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:17.308 03:01:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:17.308 03:01:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:17.308 03:01:12 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:17.308 03:01:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:17.308 03:01:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:17.308 03:01:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:17.308 03:01:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:17.308 03:01:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:17.308 03:01:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:17.566 03:01:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:17.566 03:01:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:17.566 03:01:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:17.566 03:01:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:17.566 03:01:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:17.566 03:01:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:17.566 03:01:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:17.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:17.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:13:17.566 00:13:17.566 --- 10.0.0.2 ping statistics --- 00:13:17.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.566 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:13:17.566 03:01:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:17.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:17.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:13:17.566 00:13:17.566 --- 10.0.0.1 ping statistics --- 00:13:17.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:17.566 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:13:17.566 03:01:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:17.566 03:01:12 -- nvmf/common.sh@410 -- # return 0 00:13:17.566 03:01:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:17.566 03:01:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:17.566 03:01:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:17.566 03:01:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:17.566 03:01:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:17.566 03:01:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:17.566 03:01:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:17.566 03:01:12 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:17.566 03:01:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:17.566 03:01:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:17.566 03:01:12 -- common/autotest_common.sh@10 -- # set +x 00:13:17.566 03:01:12 -- nvmf/common.sh@469 -- # nvmfpid=1954564 00:13:17.566 03:01:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:17.566 03:01:12 -- nvmf/common.sh@470 -- # waitforlisten 1954564 00:13:17.566 03:01:12 -- common/autotest_common.sh@819 -- # '[' -z 1954564 ']' 00:13:17.566 03:01:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.566 03:01:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:17.566 03:01:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:17.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.566 03:01:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:17.566 03:01:12 -- common/autotest_common.sh@10 -- # set +x 00:13:17.566 [2024-07-14 03:01:12.746643] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:17.566 [2024-07-14 03:01:12.746723] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.566 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.566 [2024-07-14 03:01:12.809229] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:17.823 [2024-07-14 03:01:12.892753] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:17.823 [2024-07-14 03:01:12.892928] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:17.823 [2024-07-14 03:01:12.892948] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:17.823 [2024-07-14 03:01:12.892961] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:17.823 [2024-07-14 03:01:12.893040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:17.823 [2024-07-14 03:01:12.893093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:17.824 [2024-07-14 03:01:12.893096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.754 03:01:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:18.754 03:01:13 -- common/autotest_common.sh@852 -- # return 0 00:13:18.754 03:01:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:18.754 03:01:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:18.754 03:01:13 -- common/autotest_common.sh@10 -- # set +x 00:13:18.754 03:01:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:18.754 03:01:13 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:18.754 03:01:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.754 03:01:13 -- common/autotest_common.sh@10 -- # set +x 00:13:18.754 [2024-07-14 03:01:13.736070] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:18.754 03:01:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.754 03:01:13 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:18.754 03:01:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.754 03:01:13 -- common/autotest_common.sh@10 -- # set +x 00:13:18.754 Malloc0 00:13:18.754 03:01:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.754 03:01:13 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:18.754 03:01:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.754 03:01:13 -- common/autotest_common.sh@10 -- # set +x 00:13:18.754 Delay0 00:13:18.754 03:01:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.754 03:01:13 -- target/abort.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:18.754 03:01:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.754 03:01:13 -- common/autotest_common.sh@10 -- # set +x 00:13:18.754 03:01:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.754 03:01:13 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:18.754 03:01:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.754 03:01:13 -- common/autotest_common.sh@10 -- # set +x 00:13:18.754 03:01:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.754 03:01:13 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:18.754 03:01:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.754 03:01:13 -- common/autotest_common.sh@10 -- # set +x 00:13:18.754 [2024-07-14 03:01:13.807744] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.754 03:01:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.754 03:01:13 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:18.754 03:01:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.754 03:01:13 -- common/autotest_common.sh@10 -- # set +x 00:13:18.754 03:01:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.754 03:01:13 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:18.754 EAL: No free 2048 kB hugepages reported on node 1 00:13:18.754 [2024-07-14 03:01:13.915290] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:21.276 Initializing NVMe Controllers 00:13:21.276 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode0 00:13:21.276 controller IO queue size 128 less than required 00:13:21.276 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:21.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:21.276 Initialization complete. Launching workers. 00:13:21.276 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 31368 00:13:21.276 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31433, failed to submit 62 00:13:21.276 success 31368, unsuccess 65, failed 0 00:13:21.276 03:01:15 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:21.276 03:01:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.276 03:01:15 -- common/autotest_common.sh@10 -- # set +x 00:13:21.276 03:01:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.276 03:01:16 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:21.276 03:01:16 -- target/abort.sh@38 -- # nvmftestfini 00:13:21.276 03:01:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:21.276 03:01:16 -- nvmf/common.sh@116 -- # sync 00:13:21.276 03:01:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:21.276 03:01:16 -- nvmf/common.sh@119 -- # set +e 00:13:21.276 03:01:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:21.276 03:01:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:21.276 rmmod nvme_tcp 00:13:21.276 rmmod nvme_fabrics 00:13:21.276 rmmod nvme_keyring 00:13:21.276 03:01:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:21.276 03:01:16 -- nvmf/common.sh@123 -- # set -e 00:13:21.276 03:01:16 -- nvmf/common.sh@124 -- # return 0 00:13:21.276 03:01:16 -- nvmf/common.sh@477 -- # '[' -n 1954564 ']' 00:13:21.276 03:01:16 -- nvmf/common.sh@478 -- # killprocess 1954564 00:13:21.276 03:01:16 -- common/autotest_common.sh@926 -- # '[' -z 1954564 ']' 00:13:21.276 03:01:16 
-- common/autotest_common.sh@930 -- # kill -0 1954564 00:13:21.276 03:01:16 -- common/autotest_common.sh@931 -- # uname 00:13:21.276 03:01:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:21.276 03:01:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1954564 00:13:21.276 03:01:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:21.276 03:01:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:21.276 03:01:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1954564' 00:13:21.276 killing process with pid 1954564 00:13:21.276 03:01:16 -- common/autotest_common.sh@945 -- # kill 1954564 00:13:21.276 03:01:16 -- common/autotest_common.sh@950 -- # wait 1954564 00:13:21.276 03:01:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:21.276 03:01:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:21.276 03:01:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:21.276 03:01:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:21.276 03:01:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:21.276 03:01:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:21.276 03:01:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:21.276 03:01:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.175 03:01:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:23.175 00:13:23.175 real 0m7.992s 00:13:23.175 user 0m12.836s 00:13:23.175 sys 0m2.549s 00:13:23.175 03:01:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:23.175 03:01:18 -- common/autotest_common.sh@10 -- # set +x 00:13:23.175 ************************************ 00:13:23.175 END TEST nvmf_abort 00:13:23.175 ************************************ 00:13:23.433 03:01:18 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 
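As a reading aid for the abort-example summary printed above (this is not SPDK code, just arithmetic over the reported counters): every failed I/O should correspond to a successfully completed abort, and the submitted aborts should split into success plus unsuccess:

```shell
# Cross-check the counters printed by the abort example run above.
io_completed=127        # I/O that completed normally
io_failed=31368         # I/O that failed (i.e. was aborted)
abort_submitted=31433   # abort commands submitted
abort_success=31368     # aborts that took effect
abort_unsuccess=65      # aborts that did not take effect
(( abort_submitted == abort_success + abort_unsuccess )) && echo "abort split consistent"
(( io_failed == abort_success )) && echo "failed I/O matches successful aborts"
```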
00:13:23.433 03:01:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:23.433 03:01:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:23.433 03:01:18 -- common/autotest_common.sh@10 -- # set +x 00:13:23.433 ************************************ 00:13:23.433 START TEST nvmf_ns_hotplug_stress 00:13:23.433 ************************************ 00:13:23.433 03:01:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:23.433 * Looking for test storage... 00:13:23.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:23.433 03:01:18 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.433 03:01:18 -- nvmf/common.sh@7 -- # uname -s 00:13:23.433 03:01:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.433 03:01:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.433 03:01:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.433 03:01:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.433 03:01:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.433 03:01:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.433 03:01:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.433 03:01:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.433 03:01:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.433 03:01:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.433 03:01:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:23.433 03:01:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:23.433 03:01:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.433 03:01:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:13:23.433 03:01:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:23.433 03:01:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:23.433 03:01:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.433 03:01:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.433 03:01:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.434 03:01:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.434 03:01:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.434 03:01:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.434 03:01:18 -- paths/export.sh@5 -- # export PATH 00:13:23.434 03:01:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.434 03:01:18 -- nvmf/common.sh@46 -- # : 0 00:13:23.434 03:01:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:23.434 03:01:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:23.434 03:01:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:23.434 03:01:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.434 03:01:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.434 03:01:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:23.434 03:01:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:23.434 03:01:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:23.434 03:01:18 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:23.434 03:01:18 -- target/ns_hotplug_stress.sh@22 -- # 
nvmftestinit 00:13:23.434 03:01:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:23.434 03:01:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.434 03:01:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:23.434 03:01:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:23.434 03:01:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:23.434 03:01:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.434 03:01:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.434 03:01:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.434 03:01:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:23.434 03:01:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:23.434 03:01:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:23.434 03:01:18 -- common/autotest_common.sh@10 -- # set +x 00:13:25.333 03:01:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:25.333 03:01:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:25.333 03:01:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:25.333 03:01:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:25.333 03:01:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:25.333 03:01:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:25.333 03:01:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:25.333 03:01:20 -- nvmf/common.sh@294 -- # net_devs=() 00:13:25.333 03:01:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:25.333 03:01:20 -- nvmf/common.sh@295 -- # e810=() 00:13:25.333 03:01:20 -- nvmf/common.sh@295 -- # local -ga e810 00:13:25.333 03:01:20 -- nvmf/common.sh@296 -- # x722=() 00:13:25.333 03:01:20 -- nvmf/common.sh@296 -- # local -ga x722 00:13:25.333 03:01:20 -- nvmf/common.sh@297 -- # mlx=() 00:13:25.333 03:01:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:25.333 03:01:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:25.333 03:01:20 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:25.333 03:01:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:25.333 03:01:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:25.333 03:01:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:25.333 03:01:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:25.333 03:01:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:25.333 03:01:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:25.333 03:01:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:25.333 03:01:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:25.333 03:01:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:25.334 03:01:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:25.334 03:01:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:25.334 03:01:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:25.334 03:01:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:25.334 03:01:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:25.334 03:01:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:25.334 03:01:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:25.334 03:01:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:25.334 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:25.334 03:01:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:25.334 03:01:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:25.334 03:01:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.334 03:01:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.334 03:01:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:25.334 03:01:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:25.334 03:01:20 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:25.334 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:25.334 03:01:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:25.334 03:01:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:25.334 03:01:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.334 03:01:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.334 03:01:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:25.334 03:01:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:25.334 03:01:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:25.334 03:01:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:25.334 03:01:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:25.334 03:01:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.334 03:01:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:25.334 03:01:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.334 03:01:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:25.334 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:25.334 03:01:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.334 03:01:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:25.334 03:01:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.334 03:01:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:25.334 03:01:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.334 03:01:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:25.334 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:25.334 03:01:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.334 03:01:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:25.334 03:01:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:25.334 03:01:20 -- nvmf/common.sh@404 -- # [[ yes == yes 
]] 00:13:25.334 03:01:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:25.334 03:01:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:25.334 03:01:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.334 03:01:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.334 03:01:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:25.334 03:01:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:25.334 03:01:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:25.334 03:01:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:25.334 03:01:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:25.334 03:01:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:25.334 03:01:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.334 03:01:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:25.334 03:01:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:25.334 03:01:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:25.334 03:01:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:25.592 03:01:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:25.592 03:01:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:25.592 03:01:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:25.592 03:01:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:25.592 03:01:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:25.592 03:01:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:25.592 03:01:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:25.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:25.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:13:25.592 00:13:25.592 --- 10.0.0.2 ping statistics --- 00:13:25.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.592 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:13:25.592 03:01:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:25.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:25.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:13:25.592 00:13:25.592 --- 10.0.0.1 ping statistics --- 00:13:25.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.592 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:13:25.592 03:01:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.592 03:01:20 -- nvmf/common.sh@410 -- # return 0 00:13:25.592 03:01:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:25.592 03:01:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.592 03:01:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:25.592 03:01:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:25.592 03:01:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.592 03:01:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:25.592 03:01:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:25.592 03:01:20 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:25.592 03:01:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:25.592 03:01:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:25.592 03:01:20 -- common/autotest_common.sh@10 -- # set +x 00:13:25.592 03:01:20 -- nvmf/common.sh@469 -- # nvmfpid=1956938 00:13:25.593 03:01:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:25.593 03:01:20 -- nvmf/common.sh@470 -- # waitforlisten 1956938 00:13:25.593 03:01:20 -- 
common/autotest_common.sh@819 -- # '[' -z 1956938 ']' 00:13:25.593 03:01:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.593 03:01:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:25.593 03:01:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.593 03:01:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:25.593 03:01:20 -- common/autotest_common.sh@10 -- # set +x 00:13:25.593 [2024-07-14 03:01:20.773233] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:25.593 [2024-07-14 03:01:20.773313] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.593 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.593 [2024-07-14 03:01:20.839093] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:25.851 [2024-07-14 03:01:20.930753] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:25.851 [2024-07-14 03:01:20.930927] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.851 [2024-07-14 03:01:20.930949] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.851 [2024-07-14 03:01:20.930964] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:25.851 [2024-07-14 03:01:20.931059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.851 [2024-07-14 03:01:20.931124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.851 [2024-07-14 03:01:20.931128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.784 03:01:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:26.784 03:01:21 -- common/autotest_common.sh@852 -- # return 0 00:13:26.784 03:01:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:26.784 03:01:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:26.784 03:01:21 -- common/autotest_common.sh@10 -- # set +x 00:13:26.784 03:01:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:26.784 03:01:21 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:26.784 03:01:21 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:26.784 [2024-07-14 03:01:21.976269] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:26.784 03:01:21 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:27.041 03:01:22 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.299 [2024-07-14 03:01:22.491229] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.299 03:01:22 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:27.556 03:01:22 -- target/ns_hotplug_stress.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:27.814 Malloc0 00:13:27.814 03:01:22 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:28.072 Delay0 00:13:28.072 03:01:23 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.329 03:01:23 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:28.586 NULL1 00:13:28.586 03:01:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:28.881 03:01:23 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1957374 00:13:28.881 03:01:23 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:28.881 03:01:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:28.881 03:01:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.881 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.255 Read completed with error (sct=0, sc=11) 00:13:30.255 03:01:25 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.255 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:13:30.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.255 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.255 03:01:25 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:30.255 03:01:25 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:30.513 true 00:13:30.513 03:01:25 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:30.513 03:01:25 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.448 03:01:26 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.448 03:01:26 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:31.448 03:01:26 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:31.706 true 00:13:31.706 03:01:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:31.706 03:01:26 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.963 03:01:27 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.221 03:01:27 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:32.221 03:01:27 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:32.479 true 00:13:32.479 03:01:27 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:32.479 03:01:27 -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.411 03:01:28 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.411 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.411 03:01:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:33.411 03:01:28 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:33.669 true 00:13:33.669 03:01:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:33.669 03:01:28 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.927 03:01:29 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.185 03:01:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:34.185 03:01:29 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:34.442 true 00:13:34.442 03:01:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:34.442 03:01:29 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.376 03:01:30 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.634 03:01:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:35.634 03:01:30 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:35.892 true 00:13:35.892 03:01:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:35.892 03:01:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.150 03:01:31 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.408 03:01:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:36.408 03:01:31 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:36.665 true 00:13:36.665 03:01:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:36.665 03:01:31 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.597 03:01:32 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.598 03:01:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:37.598 03:01:32 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:37.855 true 00:13:37.855 03:01:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:37.855 03:01:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.112 03:01:33 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.369 03:01:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:38.369 03:01:33 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:38.626 true 00:13:38.626 03:01:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:38.626 03:01:33 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.556 03:01:34 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.813 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:39.813 03:01:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:39.813 03:01:35 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:40.380 true 00:13:40.380 03:01:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:40.380 03:01:35 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:40.947 03:01:36 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.205 03:01:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:41.205 03:01:36 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:41.462 true 00:13:41.462 03:01:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:41.462 03:01:36 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.720 03:01:36 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.978 03:01:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:41.978 03:01:37 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:42.235 true 00:13:42.235 03:01:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:42.235 03:01:37 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.170 03:01:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.170 03:01:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:43.170 03:01:38 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:43.428 true 00:13:43.685 03:01:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:43.685 03:01:38 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.685 03:01:38 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.970 03:01:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:43.970 03:01:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:44.234 true 00:13:44.234 03:01:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:44.234 03:01:39 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.491 03:01:39 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.749 03:01:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:44.749 03:01:39 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:45.008 true 00:13:45.008 03:01:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:45.008 03:01:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:46.382 03:01:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:46.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:46.382 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:13:46.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:46.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:46.382 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:46.382 03:01:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:46.382 03:01:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:46.640 true 00:13:46.640 03:01:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:46.640 03:01:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.577 03:01:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.577 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:47.577 03:01:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:47.577 03:01:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:47.835 true 00:13:47.835 03:01:42 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:47.835 03:01:42 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.095 03:01:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.353 03:01:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:48.353 03:01:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:48.610 true 00:13:48.610 03:01:43 -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:48.610 03:01:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.544 03:01:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.804 03:01:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:49.804 03:01:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:49.804 true 00:13:50.063 03:01:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:50.063 03:01:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.063 03:01:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.321 03:01:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:50.321 03:01:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:50.579 true 00:13:50.579 03:01:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374 00:13:50.579 03:01:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.518 03:01:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.518 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:51.776 03:01:46 -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:13:51.776 03:01:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:13:52.035 true
00:13:52.035 03:01:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374
00:13:52.035 03:01:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:52.293 03:01:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:52.551 03:01:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:13:52.551 03:01:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:13:52.809 true
00:13:52.809 03:01:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374
00:13:52.809 03:01:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:53.748 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:53.748 03:01:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:53.748 03:01:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:13:53.748 03:01:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:13:54.006 true
00:13:54.006 03:01:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374
00:13:54.006 03:01:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:54.265 03:01:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:54.523 03:01:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:13:54.523 03:01:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:13:54.782 true
00:13:54.782 03:01:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374
00:13:54.782 03:01:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:55.718 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:55.718 03:01:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:55.976 03:01:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:13:55.976 03:01:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:13:56.234 true
00:13:56.234 03:01:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374
00:13:56.234 03:01:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:56.491 03:01:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:56.750 03:01:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:13:56.750 03:01:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:13:57.008 true
00:13:57.008 03:01:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374
00:13:57.008 03:01:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:57.945 03:01:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:57.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:57.945 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:58.202 03:01:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:13:58.202 03:01:53 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:13:58.459 true
00:13:58.459 03:01:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374
00:13:58.459 03:01:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:58.715 03:01:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:58.993 03:01:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:13:58.993 03:01:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:13:58.993 Initializing NVMe Controllers
00:13:58.993 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:58.993 Controller IO queue size 128, less than required.
00:13:58.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:58.993 Controller IO queue size 128, less than required.
00:13:58.993 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:58.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:58.993 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:13:58.993 Initialization complete. Launching workers.
00:13:58.993 ========================================================
00:13:58.993 Latency(us)
00:13:58.993 Device Information : IOPS MiB/s Average min max
00:13:58.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1148.40 0.56 63206.57 2382.02 1080252.67
00:13:58.993 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12378.11 6.04 10340.64 2413.79 443005.25
00:13:58.993 ========================================================
00:13:58.993 Total : 13526.51 6.60 14828.96 2382.02 1080252.67
00:13:58.993
00:13:59.265 true
00:13:59.265 03:01:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1957374
00:13:59.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1957374) - No such process
00:13:59.265 03:01:54 -- target/ns_hotplug_stress.sh@53 -- # wait 1957374
00:13:59.265 03:01:54 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:59.265 03:01:54 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:59.523 03:01:54 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:13:59.523 03:01:54 -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:13:59.523 03:01:54 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:13:59.523 03:01:54 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:59.523 03:01:54 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:13:59.781 null0
00:13:59.781 03:01:54 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:59.781 03:01:54 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:59.781 03:01:54 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:14:00.039 null1
00:14:00.039 03:01:55 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:00.039 03:01:55 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:00.039 03:01:55 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:14:00.296 null2
00:14:00.296 03:01:55 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:00.296 03:01:55 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:00.296 03:01:55 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:14:00.554 null3
00:14:00.554 03:01:55 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:00.554 03:01:55 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:00.554 03:01:55 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:14:00.813 null4
00:14:00.813 03:01:55 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:00.813 03:01:55 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:00.813 03:01:55 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:14:01.071 null5
00:14:01.071 03:01:56 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:01.071 03:01:56 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:01.071 03:01:56 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:14:01.329 null6
00:14:01.329 03:01:56 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:01.329 03:01:56 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:01.329 03:01:56 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:14:01.587 null7
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:14:01.587 03:01:56 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@66 -- # wait 1961408 1961409 1961411 1961413 1961415 1961417 1961419 1961421
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:01.588 03:01:56 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:14:01.845 03:01:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:14:01.845 03:01:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:01.845 03:01:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:14:01.845 03:01:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:01.845 03:01:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:14:01.845 03:01:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:01.845 03:01:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:14:01.845 03:01:56 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:02.101 03:01:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:14:02.359 03:01:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:14:02.359 03:01:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:02.359 03:01:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:14:02.359 03:01:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:02.359 03:01:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:14:02.359 03:01:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:02.359 03:01:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:14:02.359 03:01:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:02.618 03:01:57 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:14:02.875 03:01:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:14:02.875 03:01:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:02.875 03:01:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:14:02.876 03:01:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:02.876 03:01:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:14:02.876 03:01:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:02.876 03:01:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:14:02.876 03:01:57 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:03.133 03:01:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:14:03.390 03:01:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:03.390 03:01:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:14:03.390 03:01:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:14:03.390 03:01:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:03.390 03:01:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:14:03.391 03:01:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:03.391 03:01:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:14:03.391 03:01:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:14:03.647 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:03.648 03:01:58 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:03.648 03:01:58 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:14:03.904 03:01:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:14:03.904 03:01:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:03.904 03:01:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:14:03.904 03:01:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:03.904 03:01:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:14:03.904 03:01:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:03.904 03:01:58 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:14:03.904 03:01:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:14:04.161 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:04.162 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:04.162 03:01:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:14:04.162 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:04.162 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:04.162 03:01:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:14:04.419 03:01:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:14:04.419 03:01:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:04.419 03:01:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:14:04.419 03:01:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:14:04.419 03:01:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:14:04.419 03:01:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:14:04.419 03:01:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:04.419 03:01:59 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:14:04.677 03:01:59
-- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:04.677 03:01:59 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:04.934 03:02:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:04.934 03:02:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:04.934 03:02:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:14:04.934 03:02:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:04.934 03:02:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:04.934 03:02:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:04.934 03:02:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:04.934 03:02:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.192 03:02:00 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:05.450 03:02:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:05.450 03:02:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:05.450 03:02:00 -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.450 03:02:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:05.450 03:02:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:05.450 03:02:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:05.450 03:02:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:05.450 03:02:00 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:05.707 03:02:00 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:05.966 03:02:01 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:05.966 03:02:01 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:05.966 03:02:01 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:05.966 03:02:01 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:05.966 03:02:01 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:05.966 03:02:01 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:05.966 03:02:01 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:05.966 03:02:01 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:06.223 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.223 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.223 03:02:01 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:14:06.223 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.223 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.224 03:02:01 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:14:06.507 03:02:01 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:06.507 03:02:01 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:14:06.507 03:02:01 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:14:06.507 03:02:01 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:06.507 03:02:01 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:14:06.507 03:02:01 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:14:06.507 03:02:01 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:14:06.507 03:02:01 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:06.765 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.765 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.765 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.765 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.765 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.765 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.765 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.765 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.765 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.765 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.765 03:02:01 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.765 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.765 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.765 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.765 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:14:06.765 03:02:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:14:06.765 03:02:01 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:06.765 03:02:01 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:14:06.765 03:02:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:06.765 03:02:01 -- nvmf/common.sh@116 -- # sync 00:14:06.765 03:02:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:06.765 03:02:01 -- nvmf/common.sh@119 -- # set +e 00:14:06.765 03:02:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:06.765 03:02:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:06.765 rmmod nvme_tcp 00:14:06.765 rmmod nvme_fabrics 00:14:06.765 rmmod nvme_keyring 00:14:06.765 03:02:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:06.765 03:02:01 -- nvmf/common.sh@123 -- # set -e 00:14:06.765 03:02:01 -- nvmf/common.sh@124 -- # return 0 00:14:06.765 03:02:01 -- nvmf/common.sh@477 -- # '[' -n 1956938 ']' 00:14:06.765 03:02:01 -- nvmf/common.sh@478 -- # killprocess 1956938 00:14:06.765 03:02:01 -- common/autotest_common.sh@926 -- # '[' -z 1956938 ']' 00:14:06.765 03:02:01 -- common/autotest_common.sh@930 -- # kill -0 1956938 00:14:06.765 03:02:01 -- common/autotest_common.sh@931 -- # uname 00:14:06.765 03:02:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:06.765 03:02:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1956938 00:14:06.765 03:02:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:06.765 03:02:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:06.765 03:02:02 -- common/autotest_common.sh@944 -- # 
echo 'killing process with pid 1956938' 00:14:06.765 killing process with pid 1956938 00:14:06.765 03:02:02 -- common/autotest_common.sh@945 -- # kill 1956938 00:14:06.765 03:02:02 -- common/autotest_common.sh@950 -- # wait 1956938 00:14:07.023 03:02:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:07.023 03:02:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:07.023 03:02:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:07.023 03:02:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:07.023 03:02:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:07.023 03:02:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.023 03:02:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.023 03:02:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.555 03:02:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:09.555 00:14:09.555 real 0m45.825s 00:14:09.555 user 3m25.567s 00:14:09.555 sys 0m16.322s 00:14:09.555 03:02:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:09.555 03:02:04 -- common/autotest_common.sh@10 -- # set +x 00:14:09.555 ************************************ 00:14:09.555 END TEST nvmf_ns_hotplug_stress 00:14:09.555 ************************************ 00:14:09.555 03:02:04 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:09.555 03:02:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:09.555 03:02:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:09.555 03:02:04 -- common/autotest_common.sh@10 -- # set +x 00:14:09.555 ************************************ 00:14:09.555 START TEST nvmf_connect_stress 00:14:09.555 ************************************ 00:14:09.555 03:02:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
--transport=tcp 00:14:09.555 * Looking for test storage... 00:14:09.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:09.555 03:02:04 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:09.555 03:02:04 -- nvmf/common.sh@7 -- # uname -s 00:14:09.555 03:02:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:09.555 03:02:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:09.555 03:02:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:09.555 03:02:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:09.555 03:02:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:09.555 03:02:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:09.555 03:02:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:09.555 03:02:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:09.555 03:02:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:09.555 03:02:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:09.555 03:02:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.555 03:02:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.555 03:02:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:09.555 03:02:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:09.555 03:02:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:09.555 03:02:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:09.555 03:02:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:09.555 03:02:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:09.555 03:02:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:09.555 03:02:04 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.555 03:02:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.555 03:02:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.555 03:02:04 -- paths/export.sh@5 -- # export PATH 00:14:09.555 03:02:04 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:09.555 03:02:04 -- nvmf/common.sh@46 -- # : 0 00:14:09.555 03:02:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:09.555 03:02:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:09.555 03:02:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:09.555 03:02:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:09.555 03:02:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:09.555 03:02:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:09.555 03:02:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:09.555 03:02:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:09.555 03:02:04 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:09.555 03:02:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:09.555 03:02:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:09.555 03:02:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:09.555 03:02:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:09.555 03:02:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:09.555 03:02:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.555 03:02:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:09.555 03:02:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:09.555 03:02:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:09.555 03:02:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:09.555 03:02:04 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:14:09.555 03:02:04 -- common/autotest_common.sh@10 -- # set +x 00:14:11.459 03:02:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:11.459 03:02:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:11.459 03:02:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:11.459 03:02:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:11.459 03:02:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:11.459 03:02:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:11.459 03:02:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:11.459 03:02:06 -- nvmf/common.sh@294 -- # net_devs=() 00:14:11.459 03:02:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:11.459 03:02:06 -- nvmf/common.sh@295 -- # e810=() 00:14:11.459 03:02:06 -- nvmf/common.sh@295 -- # local -ga e810 00:14:11.459 03:02:06 -- nvmf/common.sh@296 -- # x722=() 00:14:11.459 03:02:06 -- nvmf/common.sh@296 -- # local -ga x722 00:14:11.459 03:02:06 -- nvmf/common.sh@297 -- # mlx=() 00:14:11.459 03:02:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:11.459 03:02:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:11.459 03:02:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:11.459 03:02:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:11.459 03:02:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:11.459 03:02:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:11.459 03:02:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:11.459 03:02:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:11.459 03:02:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:11.459 03:02:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:11.459 03:02:06 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:11.459 03:02:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:11.459 03:02:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:11.459 03:02:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:11.459 03:02:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:11.459 03:02:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:11.459 03:02:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:11.459 03:02:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:11.459 03:02:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:11.459 03:02:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:11.459 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:11.459 03:02:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:11.459 03:02:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:11.459 03:02:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.459 03:02:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.459 03:02:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:11.459 03:02:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:11.459 03:02:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:11.459 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:11.459 03:02:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:11.459 03:02:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:11.459 03:02:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.459 03:02:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.459 03:02:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:11.459 03:02:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:11.459 03:02:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:11.459 03:02:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:11.459 03:02:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:14:11.459 03:02:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.459 03:02:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:11.459 03:02:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.459 03:02:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:11.459 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:11.459 03:02:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.459 03:02:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:11.459 03:02:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.459 03:02:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:11.459 03:02:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.459 03:02:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:11.459 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:11.459 03:02:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.459 03:02:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:11.459 03:02:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:11.459 03:02:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:11.459 03:02:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:11.459 03:02:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:11.459 03:02:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.459 03:02:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.459 03:02:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:11.459 03:02:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:11.459 03:02:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:11.459 03:02:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:11.459 03:02:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:11.459 03:02:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:14:11.459 03:02:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.459 03:02:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:11.459 03:02:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:11.459 03:02:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:11.459 03:02:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:11.459 03:02:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:11.459 03:02:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:11.459 03:02:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:11.459 03:02:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:11.459 03:02:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:11.459 03:02:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:11.459 03:02:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:11.459 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:11.459 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:14:11.459 00:14:11.459 --- 10.0.0.2 ping statistics --- 00:14:11.459 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.459 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:14:11.460 03:02:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:11.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:11.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:14:11.460 00:14:11.460 --- 10.0.0.1 ping statistics --- 00:14:11.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.460 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:14:11.460 03:02:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.460 03:02:06 -- nvmf/common.sh@410 -- # return 0 00:14:11.460 03:02:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:11.460 03:02:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.460 03:02:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:11.460 03:02:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:11.460 03:02:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.460 03:02:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:11.460 03:02:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:11.460 03:02:06 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:11.460 03:02:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:11.460 03:02:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:11.460 03:02:06 -- common/autotest_common.sh@10 -- # set +x 00:14:11.460 03:02:06 -- nvmf/common.sh@469 -- # nvmfpid=1964194 00:14:11.460 03:02:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:11.460 03:02:06 -- nvmf/common.sh@470 -- # waitforlisten 1964194 00:14:11.460 03:02:06 -- common/autotest_common.sh@819 -- # '[' -z 1964194 ']' 00:14:11.460 03:02:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.460 03:02:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:11.460 03:02:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:11.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.460 03:02:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:11.460 03:02:06 -- common/autotest_common.sh@10 -- # set +x 00:14:11.460 [2024-07-14 03:02:06.548556] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:11.460 [2024-07-14 03:02:06.548636] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.460 EAL: No free 2048 kB hugepages reported on node 1 00:14:11.460 [2024-07-14 03:02:06.616801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:11.460 [2024-07-14 03:02:06.704561] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:11.460 [2024-07-14 03:02:06.704730] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.460 [2024-07-14 03:02:06.704749] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.460 [2024-07-14 03:02:06.704763] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:11.460 [2024-07-14 03:02:06.704849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:11.460 [2024-07-14 03:02:06.704976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:11.460 [2024-07-14 03:02:06.704980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.392 03:02:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:12.392 03:02:07 -- common/autotest_common.sh@852 -- # return 0 00:14:12.392 03:02:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:12.393 03:02:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:12.393 03:02:07 -- common/autotest_common.sh@10 -- # set +x 00:14:12.393 03:02:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.393 03:02:07 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:12.393 03:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.393 03:02:07 -- common/autotest_common.sh@10 -- # set +x 00:14:12.393 [2024-07-14 03:02:07.483874] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.393 03:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.393 03:02:07 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:12.393 03:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.393 03:02:07 -- common/autotest_common.sh@10 -- # set +x 00:14:12.393 03:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.393 03:02:07 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:12.393 03:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.393 03:02:07 -- common/autotest_common.sh@10 -- # set +x 00:14:12.393 [2024-07-14 03:02:07.511994] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:14:12.393 03:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.393 03:02:07 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:12.393 03:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.393 03:02:07 -- common/autotest_common.sh@10 -- # set +x 00:14:12.393 NULL1 00:14:12.393 03:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.393 03:02:07 -- target/connect_stress.sh@21 -- # PERF_PID=1964349 00:14:12.393 03:02:07 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:12.393 03:02:07 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:12.393 03:02:07 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- 
# for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 03:02:07 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:12.393 03:02:07 -- target/connect_stress.sh@28 -- # cat 00:14:12.393 
03:02:07 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:12.393 03:02:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.393 03:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.393 03:02:07 -- common/autotest_common.sh@10 -- # set +x 00:14:12.650 03:02:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:12.650 03:02:07 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:12.650 03:02:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:12.650 03:02:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:12.650 03:02:07 -- common/autotest_common.sh@10 -- # set +x 00:14:13.215 03:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:13.215 03:02:08 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:13.215 03:02:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.215 03:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:13.215 03:02:08 -- common/autotest_common.sh@10 -- # set +x 00:14:13.473 03:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:13.473 03:02:08 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:13.473 03:02:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.473 03:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:13.473 03:02:08 -- common/autotest_common.sh@10 -- # set +x 00:14:13.730 03:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:13.730 03:02:08 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:13.730 03:02:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.730 03:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:13.730 03:02:08 -- common/autotest_common.sh@10 -- # set +x 00:14:13.988 03:02:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:13.988 03:02:09 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:13.988 03:02:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:13.988 03:02:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:13.988 03:02:09 -- 
common/autotest_common.sh@10 -- # set +x 00:14:14.245 03:02:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:14.245 03:02:09 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:14.245 03:02:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.245 03:02:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:14.245 03:02:09 -- common/autotest_common.sh@10 -- # set +x 00:14:14.811 03:02:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:14.811 03:02:09 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:14.811 03:02:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:14.811 03:02:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:14.811 03:02:09 -- common/autotest_common.sh@10 -- # set +x 00:14:15.068 03:02:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:15.068 03:02:10 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:15.068 03:02:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.068 03:02:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:15.068 03:02:10 -- common/autotest_common.sh@10 -- # set +x 00:14:15.324 03:02:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:15.324 03:02:10 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:15.324 03:02:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.324 03:02:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:15.324 03:02:10 -- common/autotest_common.sh@10 -- # set +x 00:14:15.581 03:02:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:15.581 03:02:10 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:15.581 03:02:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:15.581 03:02:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:15.581 03:02:10 -- common/autotest_common.sh@10 -- # set +x 00:14:16.147 03:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:16.147 03:02:11 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:16.147 03:02:11 -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.147 03:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:16.147 03:02:11 -- common/autotest_common.sh@10 -- # set +x 00:14:16.405 03:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:16.405 03:02:11 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:16.405 03:02:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.405 03:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:16.405 03:02:11 -- common/autotest_common.sh@10 -- # set +x 00:14:16.664 03:02:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:16.664 03:02:11 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:16.664 03:02:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.664 03:02:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:16.664 03:02:11 -- common/autotest_common.sh@10 -- # set +x 00:14:16.922 03:02:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:16.922 03:02:12 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:16.922 03:02:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.922 03:02:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:16.922 03:02:12 -- common/autotest_common.sh@10 -- # set +x 00:14:17.182 03:02:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:17.182 03:02:12 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:17.182 03:02:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.182 03:02:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:17.182 03:02:12 -- common/autotest_common.sh@10 -- # set +x 00:14:17.749 03:02:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:17.749 03:02:12 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:17.749 03:02:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:17.749 03:02:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:17.749 03:02:12 -- common/autotest_common.sh@10 -- # set +x 00:14:18.007 03:02:13 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:18.007 03:02:13 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:18.007 03:02:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.007 03:02:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:18.007 03:02:13 -- common/autotest_common.sh@10 -- # set +x 00:14:18.266 03:02:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:18.266 03:02:13 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:18.266 03:02:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.266 03:02:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:18.266 03:02:13 -- common/autotest_common.sh@10 -- # set +x 00:14:18.525 03:02:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:18.525 03:02:13 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:18.525 03:02:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.525 03:02:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:18.525 03:02:13 -- common/autotest_common.sh@10 -- # set +x 00:14:18.783 03:02:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:18.783 03:02:13 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:18.783 03:02:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:18.783 03:02:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:18.783 03:02:13 -- common/autotest_common.sh@10 -- # set +x 00:14:19.348 03:02:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:19.348 03:02:14 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:19.348 03:02:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.348 03:02:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:19.348 03:02:14 -- common/autotest_common.sh@10 -- # set +x 00:14:19.607 03:02:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:19.607 03:02:14 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:19.608 03:02:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.608 03:02:14 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:14:19.608 03:02:14 -- common/autotest_common.sh@10 -- # set +x 00:14:19.866 03:02:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:19.866 03:02:14 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:19.866 03:02:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:19.866 03:02:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:19.866 03:02:14 -- common/autotest_common.sh@10 -- # set +x 00:14:20.125 03:02:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.125 03:02:15 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:20.125 03:02:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.125 03:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.125 03:02:15 -- common/autotest_common.sh@10 -- # set +x 00:14:20.384 03:02:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.384 03:02:15 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:20.384 03:02:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.384 03:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.384 03:02:15 -- common/autotest_common.sh@10 -- # set +x 00:14:20.951 03:02:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.951 03:02:15 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:20.951 03:02:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:20.951 03:02:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.951 03:02:15 -- common/autotest_common.sh@10 -- # set +x 00:14:21.210 03:02:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.210 03:02:16 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:21.210 03:02:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.210 03:02:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.210 03:02:16 -- common/autotest_common.sh@10 -- # set +x 00:14:21.470 03:02:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.470 03:02:16 -- 
target/connect_stress.sh@34 -- # kill -0 1964349 00:14:21.470 03:02:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.470 03:02:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.470 03:02:16 -- common/autotest_common.sh@10 -- # set +x 00:14:21.728 03:02:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.728 03:02:16 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:21.728 03:02:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.728 03:02:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.728 03:02:16 -- common/autotest_common.sh@10 -- # set +x 00:14:21.985 03:02:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:21.985 03:02:17 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:21.985 03:02:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:21.985 03:02:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.985 03:02:17 -- common/autotest_common.sh@10 -- # set +x 00:14:22.554 03:02:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.554 03:02:17 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:22.554 03:02:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:22.554 03:02:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:22.554 03:02:17 -- common/autotest_common.sh@10 -- # set +x 00:14:22.813 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:22.813 03:02:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:22.813 03:02:17 -- target/connect_stress.sh@34 -- # kill -0 1964349 00:14:22.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1964349) - No such process 00:14:22.813 03:02:17 -- target/connect_stress.sh@38 -- # wait 1964349 00:14:22.813 03:02:17 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:22.813 03:02:17 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 
00:14:22.813 03:02:17 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:22.813 03:02:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:22.813 03:02:17 -- nvmf/common.sh@116 -- # sync 00:14:22.813 03:02:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:22.813 03:02:17 -- nvmf/common.sh@119 -- # set +e 00:14:22.813 03:02:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:22.813 03:02:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:22.813 rmmod nvme_tcp 00:14:22.813 rmmod nvme_fabrics 00:14:22.813 rmmod nvme_keyring 00:14:22.813 03:02:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:22.813 03:02:17 -- nvmf/common.sh@123 -- # set -e 00:14:22.813 03:02:17 -- nvmf/common.sh@124 -- # return 0 00:14:22.813 03:02:17 -- nvmf/common.sh@477 -- # '[' -n 1964194 ']' 00:14:22.813 03:02:17 -- nvmf/common.sh@478 -- # killprocess 1964194 00:14:22.813 03:02:17 -- common/autotest_common.sh@926 -- # '[' -z 1964194 ']' 00:14:22.813 03:02:17 -- common/autotest_common.sh@930 -- # kill -0 1964194 00:14:22.813 03:02:17 -- common/autotest_common.sh@931 -- # uname 00:14:22.813 03:02:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:22.813 03:02:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1964194 00:14:22.813 03:02:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:22.813 03:02:17 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:22.813 03:02:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1964194' 00:14:22.813 killing process with pid 1964194 00:14:22.813 03:02:17 -- common/autotest_common.sh@945 -- # kill 1964194 00:14:22.813 03:02:17 -- common/autotest_common.sh@950 -- # wait 1964194 00:14:23.072 03:02:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:23.072 03:02:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:23.072 03:02:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:23.072 03:02:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:14:23.072 03:02:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:23.072 03:02:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.072 03:02:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.072 03:02:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.977 03:02:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:24.977 00:14:24.977 real 0m15.930s 00:14:24.977 user 0m40.429s 00:14:24.977 sys 0m6.001s 00:14:24.977 03:02:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:24.977 03:02:20 -- common/autotest_common.sh@10 -- # set +x 00:14:24.977 ************************************ 00:14:24.977 END TEST nvmf_connect_stress 00:14:24.977 ************************************ 00:14:25.234 03:02:20 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:25.234 03:02:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:25.234 03:02:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:25.234 03:02:20 -- common/autotest_common.sh@10 -- # set +x 00:14:25.234 ************************************ 00:14:25.234 START TEST nvmf_fused_ordering 00:14:25.234 ************************************ 00:14:25.234 03:02:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:25.234 * Looking for test storage... 
00:14:25.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:25.234 03:02:20 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:25.234 03:02:20 -- nvmf/common.sh@7 -- # uname -s 00:14:25.234 03:02:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.234 03:02:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.234 03:02:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.234 03:02:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.235 03:02:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.235 03:02:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.235 03:02:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.235 03:02:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.235 03:02:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.235 03:02:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.235 03:02:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.235 03:02:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.235 03:02:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.235 03:02:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.235 03:02:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:25.235 03:02:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:25.235 03:02:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.235 03:02:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.235 03:02:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.235 03:02:20 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.235 03:02:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.235 03:02:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.235 03:02:20 -- paths/export.sh@5 -- # export PATH 00:14:25.235 03:02:20 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.235 03:02:20 -- nvmf/common.sh@46 -- # : 0 00:14:25.235 03:02:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:25.235 03:02:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:25.235 03:02:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:25.235 03:02:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.235 03:02:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.235 03:02:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:25.235 03:02:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:25.235 03:02:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:25.235 03:02:20 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:25.235 03:02:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:25.235 03:02:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:25.235 03:02:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:25.235 03:02:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:25.235 03:02:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:25.235 03:02:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.235 03:02:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.235 03:02:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.235 03:02:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:25.235 03:02:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:25.235 03:02:20 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:14:25.235 03:02:20 -- common/autotest_common.sh@10 -- # set +x 00:14:27.137 03:02:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:27.137 03:02:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:27.137 03:02:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:27.137 03:02:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:27.137 03:02:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:27.137 03:02:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:27.137 03:02:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:27.137 03:02:22 -- nvmf/common.sh@294 -- # net_devs=() 00:14:27.137 03:02:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:27.137 03:02:22 -- nvmf/common.sh@295 -- # e810=() 00:14:27.137 03:02:22 -- nvmf/common.sh@295 -- # local -ga e810 00:14:27.137 03:02:22 -- nvmf/common.sh@296 -- # x722=() 00:14:27.137 03:02:22 -- nvmf/common.sh@296 -- # local -ga x722 00:14:27.137 03:02:22 -- nvmf/common.sh@297 -- # mlx=() 00:14:27.137 03:02:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:27.137 03:02:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.137 03:02:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.137 03:02:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.137 03:02:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.137 03:02:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.137 03:02:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.137 03:02:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.137 03:02:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.137 03:02:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.137 03:02:22 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.137 03:02:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.137 03:02:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:27.137 03:02:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:27.137 03:02:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:27.137 03:02:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:27.137 03:02:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:27.137 03:02:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:27.137 03:02:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:27.137 03:02:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:27.137 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:27.137 03:02:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:27.137 03:02:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:27.137 03:02:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.137 03:02:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.137 03:02:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:27.137 03:02:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:27.137 03:02:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:27.137 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:27.137 03:02:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:27.137 03:02:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:27.137 03:02:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.137 03:02:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.137 03:02:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:27.137 03:02:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:27.137 03:02:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:27.137 03:02:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:27.137 03:02:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
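The trace above shows `gather_supported_nvmf_pci_devs` bucketing NICs into `e810`/`x722`/`mlx` arrays by PCI vendor:device ID. A minimal runnable sketch of that classification step follows; the `pci_bus_cache` contents are hypothetical stand-ins for what a real `/sys/bus/pci/devices` scan would produce (the two `0000:0a:00.x` addresses mirror the "Found ... (0x8086 - 0x159b)" lines in this log, the Mellanox entry is invented for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the vendor:device bucketing done by nvmf/common.sh.
# pci_bus_cache below is a hypothetical stand-in for a sysfs scan.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:0a:00.0 0000:0a:00.1"  # Intel E810 (ice driver), as in this log
  ["0x15b3:0x1017"]="0000:5e:00.0"               # Mellanox ConnectX-5 (hypothetical)
)
intel=0x8086 mellanox=0x15b3
e810=() mlx=() pci_devs=()
# Unquoted on purpose: word-split the cached string into one array entry per address.
e810+=(${pci_bus_cache["$intel:0x159b"]})
mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
pci_devs+=("${e810[@]}")
for pci in "${pci_devs[@]}"; do
  echo "Found $pci (0x8086 - 0x159b)"
done
```

A missing cache key simply expands to nothing, so unsupported device IDs drop out of the arrays silently, which matches how the real script tolerates absent hardware.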
00:14:27.137 03:02:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.137 03:02:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:27.137 03:02:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.137 03:02:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:27.137 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:27.137 03:02:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.137 03:02:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:27.137 03:02:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.137 03:02:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:27.137 03:02:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.137 03:02:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:27.137 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:27.137 03:02:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.137 03:02:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:27.137 03:02:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:27.137 03:02:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:27.137 03:02:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:27.137 03:02:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:27.137 03:02:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.137 03:02:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.137 03:02:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.137 03:02:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:27.137 03:02:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.137 03:02:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.137 03:02:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:27.137 03:02:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
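The "Found net devices under 0000:0a:00.x: cvl_0_x" lines come from globbing `/sys/bus/pci/devices/$pci/net/*` and keeping only the interface names. This sketch reproduces that step against a throwaway directory standing in for sysfs, so it runs anywhere without the hardware:

```shell
#!/usr/bin/env bash
# Sketch of the per-PCI-function net-device discovery in nvmf/common.sh.
# A temp directory mimics the sysfs layout so the glob resolves deterministically.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0" "$sysfs/0000:0a:00.1/net/cvl_0_1"
net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do
  pci_net_devs=("$sysfs/$pci/net/"*)          # glob the netdev directories
  pci_net_devs=("${pci_net_devs[@]##*/}")     # strip the path, keep interface names
  echo "Found net devices under $pci: ${pci_net_devs[*]}"
  net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```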
00:14:27.137 03:02:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.137 03:02:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:27.137 03:02:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:27.137 03:02:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.137 03:02:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.137 03:02:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.137 03:02:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.137 03:02:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:27.137 03:02:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.137 03:02:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.137 03:02:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.137 03:02:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:27.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:14:27.137 00:14:27.137 --- 10.0.0.2 ping statistics --- 00:14:27.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.137 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:14:27.137 03:02:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:27.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:27.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:14:27.137 00:14:27.137 --- 10.0.0.1 ping statistics --- 00:14:27.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.138 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:14:27.138 03:02:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.138 03:02:22 -- nvmf/common.sh@410 -- # return 0 00:14:27.138 03:02:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:27.138 03:02:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.138 03:02:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:27.138 03:02:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:27.138 03:02:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.138 03:02:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:27.138 03:02:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:27.138 03:02:22 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:27.138 03:02:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:27.138 03:02:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:27.138 03:02:22 -- common/autotest_common.sh@10 -- # set +x 00:14:27.138 03:02:22 -- nvmf/common.sh@469 -- # nvmfpid=1967541 00:14:27.138 03:02:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:27.138 03:02:22 -- nvmf/common.sh@470 -- # waitforlisten 1967541 00:14:27.138 03:02:22 -- common/autotest_common.sh@819 -- # '[' -z 1967541 ']' 00:14:27.138 03:02:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.138 03:02:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:27.138 03:02:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:27.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.138 03:02:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:27.138 03:02:22 -- common/autotest_common.sh@10 -- # set +x 00:14:27.138 [2024-07-14 03:02:22.379158] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:27.138 [2024-07-14 03:02:22.379251] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:27.397 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.397 [2024-07-14 03:02:22.445362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.397 [2024-07-14 03:02:22.530563] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:27.397 [2024-07-14 03:02:22.530703] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:27.397 [2024-07-14 03:02:22.530720] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:27.397 [2024-07-14 03:02:22.530732] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
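The namespace plumbing traced above (`nvmf_tcp_init`) moves the first port into a network namespace as the target side and leaves the second in the root namespace as the initiator, then opens TCP port 4420. The sketch below replays that sequence; by default it only prints the commands (`DRY_RUN=1`), since real execution needs root and the two `cvl_0_x` interfaces from this machine:

```shell
#!/usr/bin/env bash
# Sketch of the target/initiator split performed by nvmf_tcp_init.
# DRY_RUN=1 (default) echoes each command instead of executing it.
TGT=cvl_0_0 INI=cvl_0_1 NS=cvl_0_0_ns_spdk
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi; }
run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"                      # target NIC lives in the namespace
run ip addr add 10.0.0.1/24 dev "$INI"                  # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"  # target side
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP
run ping -c 1 10.0.0.2                                  # reachability check, as in the log
```

Running the target inside a namespace is what lets one host act as both NVMe-oF target and initiator over a real NIC pair, which is exactly the `ping` round-trip the log verifies in both directions.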
00:14:27.397 [2024-07-14 03:02:22.530758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.334 03:02:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:28.334 03:02:23 -- common/autotest_common.sh@852 -- # return 0 00:14:28.334 03:02:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:28.334 03:02:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:28.334 03:02:23 -- common/autotest_common.sh@10 -- # set +x 00:14:28.334 03:02:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.334 03:02:23 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:28.334 03:02:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.334 03:02:23 -- common/autotest_common.sh@10 -- # set +x 00:14:28.334 [2024-07-14 03:02:23.368098] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.334 03:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.334 03:02:23 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:28.334 03:02:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.334 03:02:23 -- common/autotest_common.sh@10 -- # set +x 00:14:28.334 03:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.334 03:02:23 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:28.334 03:02:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.334 03:02:23 -- common/autotest_common.sh@10 -- # set +x 00:14:28.334 [2024-07-14 03:02:23.384275] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:28.334 03:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.334 03:02:23 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:28.334 03:02:23 
-- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.334 03:02:23 -- common/autotest_common.sh@10 -- # set +x 00:14:28.334 NULL1 00:14:28.334 03:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.334 03:02:23 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:28.335 03:02:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.335 03:02:23 -- common/autotest_common.sh@10 -- # set +x 00:14:28.335 03:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.335 03:02:23 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:28.335 03:02:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:28.335 03:02:23 -- common/autotest_common.sh@10 -- # set +x 00:14:28.335 03:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:28.335 03:02:23 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:28.335 [2024-07-14 03:02:23.427472] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
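The `rpc_cmd` calls traced above are autotest's wrapper around SPDK's `scripts/rpc.py` talking to `/var/tmp/spdk.sock`. A condensed sketch of that setup sequence, with the same arguments the log shows; `RPC` defaults to `echo rpc.py` here so the sketch runs without a live target (point it at the real `scripts/rpc.py` to execute):

```shell
#!/usr/bin/env bash
# Sketch of the RPC sequence fused_ordering.sh issues; RPC is a stand-in.
RPC=${RPC:-"echo rpc.py"}
NQN=nqn.2016-06.io.spdk:cnode1
$RPC nvmf_create_transport -t tcp -o -u 8192              # TCP transport, 8192-byte IO unit
$RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512                      # 1000 MiB null bdev, 512-byte blocks
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns "$NQN" NULL1
```

After this, the `fused_ordering` binary connects as an initiator to `trtype:tcp traddr:10.0.0.2 trsvcid:4420` and drives the fused compare-and-write iterations counted below.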
00:14:28.335 [2024-07-14 03:02:23.427513] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1967700 ] 00:14:28.335 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.902 Attached to nqn.2016-06.io.spdk:cnode1 00:14:28.902 Namespace ID: 1 size: 1GB 00:14:28.902 fused_ordering(0) fused_ordering(1) fused_ordering(2) [per-iteration fused_ordering counter output elided] fused_ordering(204) 00:14:29.475 fused_ordering(205) [elided] fused_ordering(409) 00:14:30.450 fused_ordering(410) [elided] fused_ordering(614) 00:14:31.019 fused_ordering(615) [elided] fused_ordering(715)
00:14:31.019 fused_ordering(716) 00:14:31.019 fused_ordering(717) 00:14:31.019 fused_ordering(718) 00:14:31.019 fused_ordering(719) 00:14:31.019 fused_ordering(720) 00:14:31.019 fused_ordering(721) 00:14:31.019 fused_ordering(722) 00:14:31.019 fused_ordering(723) 00:14:31.019 fused_ordering(724) 00:14:31.019 fused_ordering(725) 00:14:31.019 fused_ordering(726) 00:14:31.019 fused_ordering(727) 00:14:31.019 fused_ordering(728) 00:14:31.019 fused_ordering(729) 00:14:31.019 fused_ordering(730) 00:14:31.019 fused_ordering(731) 00:14:31.019 fused_ordering(732) 00:14:31.019 fused_ordering(733) 00:14:31.019 fused_ordering(734) 00:14:31.019 fused_ordering(735) 00:14:31.019 fused_ordering(736) 00:14:31.019 fused_ordering(737) 00:14:31.019 fused_ordering(738) 00:14:31.019 fused_ordering(739) 00:14:31.019 fused_ordering(740) 00:14:31.019 fused_ordering(741) 00:14:31.019 fused_ordering(742) 00:14:31.019 fused_ordering(743) 00:14:31.019 fused_ordering(744) 00:14:31.019 fused_ordering(745) 00:14:31.019 fused_ordering(746) 00:14:31.019 fused_ordering(747) 00:14:31.019 fused_ordering(748) 00:14:31.019 fused_ordering(749) 00:14:31.019 fused_ordering(750) 00:14:31.019 fused_ordering(751) 00:14:31.019 fused_ordering(752) 00:14:31.019 fused_ordering(753) 00:14:31.019 fused_ordering(754) 00:14:31.019 fused_ordering(755) 00:14:31.019 fused_ordering(756) 00:14:31.019 fused_ordering(757) 00:14:31.019 fused_ordering(758) 00:14:31.019 fused_ordering(759) 00:14:31.019 fused_ordering(760) 00:14:31.019 fused_ordering(761) 00:14:31.019 fused_ordering(762) 00:14:31.019 fused_ordering(763) 00:14:31.019 fused_ordering(764) 00:14:31.019 fused_ordering(765) 00:14:31.019 fused_ordering(766) 00:14:31.019 fused_ordering(767) 00:14:31.019 fused_ordering(768) 00:14:31.019 fused_ordering(769) 00:14:31.019 fused_ordering(770) 00:14:31.019 fused_ordering(771) 00:14:31.019 fused_ordering(772) 00:14:31.019 fused_ordering(773) 00:14:31.019 fused_ordering(774) 00:14:31.019 fused_ordering(775) 00:14:31.019 
fused_ordering(776) 00:14:31.019 fused_ordering(777) 00:14:31.019 fused_ordering(778) 00:14:31.019 fused_ordering(779) 00:14:31.019 fused_ordering(780) 00:14:31.019 fused_ordering(781) 00:14:31.019 fused_ordering(782) 00:14:31.019 fused_ordering(783) 00:14:31.019 fused_ordering(784) 00:14:31.019 fused_ordering(785) 00:14:31.019 fused_ordering(786) 00:14:31.020 fused_ordering(787) 00:14:31.020 fused_ordering(788) 00:14:31.020 fused_ordering(789) 00:14:31.020 fused_ordering(790) 00:14:31.020 fused_ordering(791) 00:14:31.020 fused_ordering(792) 00:14:31.020 fused_ordering(793) 00:14:31.020 fused_ordering(794) 00:14:31.020 fused_ordering(795) 00:14:31.020 fused_ordering(796) 00:14:31.020 fused_ordering(797) 00:14:31.020 fused_ordering(798) 00:14:31.020 fused_ordering(799) 00:14:31.020 fused_ordering(800) 00:14:31.020 fused_ordering(801) 00:14:31.020 fused_ordering(802) 00:14:31.020 fused_ordering(803) 00:14:31.020 fused_ordering(804) 00:14:31.020 fused_ordering(805) 00:14:31.020 fused_ordering(806) 00:14:31.020 fused_ordering(807) 00:14:31.020 fused_ordering(808) 00:14:31.020 fused_ordering(809) 00:14:31.020 fused_ordering(810) 00:14:31.020 fused_ordering(811) 00:14:31.020 fused_ordering(812) 00:14:31.020 fused_ordering(813) 00:14:31.020 fused_ordering(814) 00:14:31.020 fused_ordering(815) 00:14:31.020 fused_ordering(816) 00:14:31.020 fused_ordering(817) 00:14:31.020 fused_ordering(818) 00:14:31.020 fused_ordering(819) 00:14:31.020 fused_ordering(820) 00:14:31.957 fused_ordering(821) 00:14:31.957 fused_ordering(822) 00:14:31.957 fused_ordering(823) 00:14:31.957 fused_ordering(824) 00:14:31.957 fused_ordering(825) 00:14:31.957 fused_ordering(826) 00:14:31.957 fused_ordering(827) 00:14:31.957 fused_ordering(828) 00:14:31.957 fused_ordering(829) 00:14:31.957 fused_ordering(830) 00:14:31.957 fused_ordering(831) 00:14:31.957 fused_ordering(832) 00:14:31.957 fused_ordering(833) 00:14:31.957 fused_ordering(834) 00:14:31.957 fused_ordering(835) 00:14:31.957 fused_ordering(836) 
00:14:31.957 fused_ordering(837) 00:14:31.957 fused_ordering(838) 00:14:31.957 fused_ordering(839) 00:14:31.957 fused_ordering(840) 00:14:31.957 fused_ordering(841) 00:14:31.957 fused_ordering(842) 00:14:31.957 fused_ordering(843) 00:14:31.957 fused_ordering(844) 00:14:31.957 fused_ordering(845) 00:14:31.957 fused_ordering(846) 00:14:31.957 fused_ordering(847) 00:14:31.957 fused_ordering(848) 00:14:31.957 fused_ordering(849) 00:14:31.957 fused_ordering(850) 00:14:31.957 fused_ordering(851) 00:14:31.957 fused_ordering(852) 00:14:31.957 fused_ordering(853) 00:14:31.957 fused_ordering(854) 00:14:31.957 fused_ordering(855) 00:14:31.957 fused_ordering(856) 00:14:31.957 fused_ordering(857) 00:14:31.957 fused_ordering(858) 00:14:31.957 fused_ordering(859) 00:14:31.957 fused_ordering(860) 00:14:31.957 fused_ordering(861) 00:14:31.957 fused_ordering(862) 00:14:31.957 fused_ordering(863) 00:14:31.957 fused_ordering(864) 00:14:31.957 fused_ordering(865) 00:14:31.957 fused_ordering(866) 00:14:31.957 fused_ordering(867) 00:14:31.957 fused_ordering(868) 00:14:31.957 fused_ordering(869) 00:14:31.957 fused_ordering(870) 00:14:31.957 fused_ordering(871) 00:14:31.957 fused_ordering(872) 00:14:31.957 fused_ordering(873) 00:14:31.957 fused_ordering(874) 00:14:31.957 fused_ordering(875) 00:14:31.957 fused_ordering(876) 00:14:31.957 fused_ordering(877) 00:14:31.957 fused_ordering(878) 00:14:31.957 fused_ordering(879) 00:14:31.957 fused_ordering(880) 00:14:31.957 fused_ordering(881) 00:14:31.957 fused_ordering(882) 00:14:31.957 fused_ordering(883) 00:14:31.957 fused_ordering(884) 00:14:31.957 fused_ordering(885) 00:14:31.957 fused_ordering(886) 00:14:31.957 fused_ordering(887) 00:14:31.957 fused_ordering(888) 00:14:31.957 fused_ordering(889) 00:14:31.957 fused_ordering(890) 00:14:31.957 fused_ordering(891) 00:14:31.957 fused_ordering(892) 00:14:31.957 fused_ordering(893) 00:14:31.957 fused_ordering(894) 00:14:31.957 fused_ordering(895) 00:14:31.957 fused_ordering(896) 00:14:31.957 
fused_ordering(897) 00:14:31.957 fused_ordering(898) 00:14:31.957 fused_ordering(899) 00:14:31.957 fused_ordering(900) 00:14:31.957 fused_ordering(901) 00:14:31.957 fused_ordering(902) 00:14:31.957 fused_ordering(903) 00:14:31.957 fused_ordering(904) 00:14:31.957 fused_ordering(905) 00:14:31.957 fused_ordering(906) 00:14:31.957 fused_ordering(907) 00:14:31.957 fused_ordering(908) 00:14:31.957 fused_ordering(909) 00:14:31.957 fused_ordering(910) 00:14:31.957 fused_ordering(911) 00:14:31.957 fused_ordering(912) 00:14:31.957 fused_ordering(913) 00:14:31.957 fused_ordering(914) 00:14:31.957 fused_ordering(915) 00:14:31.957 fused_ordering(916) 00:14:31.957 fused_ordering(917) 00:14:31.957 fused_ordering(918) 00:14:31.957 fused_ordering(919) 00:14:31.957 fused_ordering(920) 00:14:31.957 fused_ordering(921) 00:14:31.957 fused_ordering(922) 00:14:31.957 fused_ordering(923) 00:14:31.957 fused_ordering(924) 00:14:31.957 fused_ordering(925) 00:14:31.957 fused_ordering(926) 00:14:31.957 fused_ordering(927) 00:14:31.957 fused_ordering(928) 00:14:31.957 fused_ordering(929) 00:14:31.957 fused_ordering(930) 00:14:31.957 fused_ordering(931) 00:14:31.957 fused_ordering(932) 00:14:31.957 fused_ordering(933) 00:14:31.957 fused_ordering(934) 00:14:31.957 fused_ordering(935) 00:14:31.957 fused_ordering(936) 00:14:31.957 fused_ordering(937) 00:14:31.957 fused_ordering(938) 00:14:31.957 fused_ordering(939) 00:14:31.957 fused_ordering(940) 00:14:31.957 fused_ordering(941) 00:14:31.957 fused_ordering(942) 00:14:31.957 fused_ordering(943) 00:14:31.957 fused_ordering(944) 00:14:31.957 fused_ordering(945) 00:14:31.958 fused_ordering(946) 00:14:31.958 fused_ordering(947) 00:14:31.958 fused_ordering(948) 00:14:31.958 fused_ordering(949) 00:14:31.958 fused_ordering(950) 00:14:31.958 fused_ordering(951) 00:14:31.958 fused_ordering(952) 00:14:31.958 fused_ordering(953) 00:14:31.958 fused_ordering(954) 00:14:31.958 fused_ordering(955) 00:14:31.958 fused_ordering(956) 00:14:31.958 fused_ordering(957) 
00:14:31.958 fused_ordering(958) 00:14:31.958 fused_ordering(959) 00:14:31.958 fused_ordering(960) 00:14:31.958 fused_ordering(961) 00:14:31.958 fused_ordering(962) 00:14:31.958 fused_ordering(963) 00:14:31.958 fused_ordering(964) 00:14:31.958 fused_ordering(965) 00:14:31.958 fused_ordering(966) 00:14:31.958 fused_ordering(967) 00:14:31.958 fused_ordering(968) 00:14:31.958 fused_ordering(969) 00:14:31.958 fused_ordering(970) 00:14:31.958 fused_ordering(971) 00:14:31.958 fused_ordering(972) 00:14:31.958 fused_ordering(973) 00:14:31.958 fused_ordering(974) 00:14:31.958 fused_ordering(975) 00:14:31.958 fused_ordering(976) 00:14:31.958 fused_ordering(977) 00:14:31.958 fused_ordering(978) 00:14:31.958 fused_ordering(979) 00:14:31.958 fused_ordering(980) 00:14:31.958 fused_ordering(981) 00:14:31.958 fused_ordering(982) 00:14:31.958 fused_ordering(983) 00:14:31.958 fused_ordering(984) 00:14:31.958 fused_ordering(985) 00:14:31.958 fused_ordering(986) 00:14:31.958 fused_ordering(987) 00:14:31.958 fused_ordering(988) 00:14:31.958 fused_ordering(989) 00:14:31.958 fused_ordering(990) 00:14:31.958 fused_ordering(991) 00:14:31.958 fused_ordering(992) 00:14:31.958 fused_ordering(993) 00:14:31.958 fused_ordering(994) 00:14:31.958 fused_ordering(995) 00:14:31.958 fused_ordering(996) 00:14:31.958 fused_ordering(997) 00:14:31.958 fused_ordering(998) 00:14:31.958 fused_ordering(999) 00:14:31.958 fused_ordering(1000) 00:14:31.958 fused_ordering(1001) 00:14:31.958 fused_ordering(1002) 00:14:31.958 fused_ordering(1003) 00:14:31.958 fused_ordering(1004) 00:14:31.958 fused_ordering(1005) 00:14:31.958 fused_ordering(1006) 00:14:31.958 fused_ordering(1007) 00:14:31.958 fused_ordering(1008) 00:14:31.958 fused_ordering(1009) 00:14:31.958 fused_ordering(1010) 00:14:31.958 fused_ordering(1011) 00:14:31.958 fused_ordering(1012) 00:14:31.958 fused_ordering(1013) 00:14:31.958 fused_ordering(1014) 00:14:31.958 fused_ordering(1015) 00:14:31.958 fused_ordering(1016) 00:14:31.958 fused_ordering(1017) 
00:14:31.958 fused_ordering(1018) 00:14:31.958 fused_ordering(1019) 00:14:31.958 fused_ordering(1020) 00:14:31.958 fused_ordering(1021) 00:14:31.958 fused_ordering(1022) 00:14:31.958 fused_ordering(1023) 00:14:31.958 03:02:27 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:31.958 03:02:27 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:31.958 03:02:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:31.958 03:02:27 -- nvmf/common.sh@116 -- # sync 00:14:31.958 03:02:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:31.958 03:02:27 -- nvmf/common.sh@119 -- # set +e 00:14:31.958 03:02:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:31.958 03:02:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:32.217 rmmod nvme_tcp 00:14:32.217 rmmod nvme_fabrics 00:14:32.217 rmmod nvme_keyring 00:14:32.217 03:02:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:32.218 03:02:27 -- nvmf/common.sh@123 -- # set -e 00:14:32.218 03:02:27 -- nvmf/common.sh@124 -- # return 0 00:14:32.218 03:02:27 -- nvmf/common.sh@477 -- # '[' -n 1967541 ']' 00:14:32.218 03:02:27 -- nvmf/common.sh@478 -- # killprocess 1967541 00:14:32.218 03:02:27 -- common/autotest_common.sh@926 -- # '[' -z 1967541 ']' 00:14:32.218 03:02:27 -- common/autotest_common.sh@930 -- # kill -0 1967541 00:14:32.218 03:02:27 -- common/autotest_common.sh@931 -- # uname 00:14:32.218 03:02:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:32.218 03:02:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1967541 00:14:32.218 03:02:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:32.218 03:02:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:32.218 03:02:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1967541' 00:14:32.218 killing process with pid 1967541 00:14:32.218 03:02:27 -- common/autotest_common.sh@945 -- # kill 1967541 00:14:32.218 03:02:27 -- common/autotest_common.sh@950 -- 
# wait 1967541 00:14:32.478 03:02:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:32.478 03:02:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:32.478 03:02:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:32.478 03:02:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:32.478 03:02:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:32.478 03:02:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:32.478 03:02:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:32.478 03:02:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.381 03:02:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:34.381 00:14:34.381 real 0m9.328s 00:14:34.381 user 0m7.085s 00:14:34.381 sys 0m4.465s 00:14:34.381 03:02:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.381 03:02:29 -- common/autotest_common.sh@10 -- # set +x 00:14:34.381 ************************************ 00:14:34.381 END TEST nvmf_fused_ordering 00:14:34.381 ************************************ 00:14:34.381 03:02:29 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:34.381 03:02:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:34.381 03:02:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:34.381 03:02:29 -- common/autotest_common.sh@10 -- # set +x 00:14:34.381 ************************************ 00:14:34.381 START TEST nvmf_delete_subsystem 00:14:34.381 ************************************ 00:14:34.381 03:02:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:34.381 * Looking for test storage... 
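The teardown sequence above (nvmftestfini: remove the nvme-tcp and nvme-fabrics modules, kill the nvmf_tgt pid, wait on it, then flush the initiator interface) can be sketched as a dry-run shell function. The `run`/echo wrapper and the function name are illustrative assumptions; the command order and pid 1967541 are taken from the log.

```shell
# Dry-run sketch of the nvmftestfini teardown order seen above: kernel
# modules are removed first, then the app pid is killed and reaped, then
# the initiator interface addresses are flushed.  The echo wrapper stands
# in for actually executing the commands (which would require root).
run() { echo "$*"; }

nvmftestfini_sketch() {
  local pid=$1
  run modprobe -v -r nvme-tcp       # may be retried while references drain
  run modprobe -v -r nvme-fabrics
  run kill "$pid"                   # SIGTERM the nvmf_tgt reactor process
  run wait "$pid"                   # reap it before touching the netns
  run ip -4 addr flush cvl_0_1
}
nvmftestfini_sketch 1967541
```

Passing a real executor (e.g. `run() { sudo "$@"; }`) instead of the echo wrapper would apply the same sequence for real.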
00:14:34.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:34.640 03:02:29 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:34.640 03:02:29 -- nvmf/common.sh@7 -- # uname -s 00:14:34.640 03:02:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.640 03:02:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.640 03:02:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.640 03:02:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.640 03:02:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.640 03:02:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.640 03:02:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.640 03:02:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.640 03:02:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.640 03:02:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.640 03:02:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.640 03:02:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.640 03:02:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.640 03:02:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.640 03:02:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:34.640 03:02:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:34.640 03:02:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.640 03:02:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.640 03:02:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.640 03:02:29 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.640 03:02:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.640 03:02:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.640 03:02:29 -- paths/export.sh@5 -- # export PATH 00:14:34.640 03:02:29 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.640 03:02:29 -- nvmf/common.sh@46 -- # : 0 00:14:34.640 03:02:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:34.640 03:02:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:34.640 03:02:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:34.640 03:02:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.640 03:02:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.640 03:02:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:34.640 03:02:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:34.640 03:02:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:34.640 03:02:29 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:34.640 03:02:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:34.640 03:02:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.640 03:02:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:34.640 03:02:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:34.640 03:02:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:34.640 03:02:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.640 03:02:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.640 03:02:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.640 03:02:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:34.640 03:02:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:34.640 03:02:29 
-- nvmf/common.sh@284 -- # xtrace_disable 00:14:34.640 03:02:29 -- common/autotest_common.sh@10 -- # set +x 00:14:36.544 03:02:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:36.544 03:02:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:36.544 03:02:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:36.544 03:02:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:36.544 03:02:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:36.544 03:02:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:36.544 03:02:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:36.544 03:02:31 -- nvmf/common.sh@294 -- # net_devs=() 00:14:36.544 03:02:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:36.544 03:02:31 -- nvmf/common.sh@295 -- # e810=() 00:14:36.544 03:02:31 -- nvmf/common.sh@295 -- # local -ga e810 00:14:36.544 03:02:31 -- nvmf/common.sh@296 -- # x722=() 00:14:36.544 03:02:31 -- nvmf/common.sh@296 -- # local -ga x722 00:14:36.544 03:02:31 -- nvmf/common.sh@297 -- # mlx=() 00:14:36.544 03:02:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:36.544 03:02:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:36.544 03:02:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:36.544 03:02:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:36.544 03:02:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:36.544 03:02:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:36.544 03:02:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:36.544 03:02:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:36.544 03:02:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:36.544 03:02:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:36.544 03:02:31 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:36.544 03:02:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:36.544 03:02:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:36.544 03:02:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:36.544 03:02:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:36.544 03:02:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:36.544 03:02:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:36.544 03:02:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:36.544 03:02:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:36.544 03:02:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:36.544 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:36.544 03:02:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:36.544 03:02:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:36.544 03:02:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.544 03:02:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.544 03:02:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:36.544 03:02:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:36.544 03:02:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:36.544 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:36.545 03:02:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:36.545 03:02:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:36.545 03:02:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.545 03:02:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.545 03:02:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:36.545 03:02:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:36.545 03:02:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:36.545 03:02:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:36.545 03:02:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:14:36.545 03:02:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.545 03:02:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:36.545 03:02:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.545 03:02:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:36.545 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:36.545 03:02:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.545 03:02:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:36.545 03:02:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.545 03:02:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:36.545 03:02:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.545 03:02:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:36.545 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:36.545 03:02:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.545 03:02:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:36.545 03:02:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:36.545 03:02:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:36.545 03:02:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:36.545 03:02:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:36.545 03:02:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.545 03:02:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.545 03:02:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:36.545 03:02:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:36.545 03:02:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:36.545 03:02:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:36.545 03:02:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:36.545 03:02:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:14:36.545 03:02:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.545 03:02:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:36.545 03:02:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:36.545 03:02:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:36.545 03:02:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:36.545 03:02:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:36.545 03:02:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:36.545 03:02:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:36.545 03:02:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:36.545 03:02:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:36.545 03:02:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:36.545 03:02:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:36.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:36.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:14:36.545 00:14:36.545 --- 10.0.0.2 ping statistics --- 00:14:36.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.545 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:14:36.545 03:02:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:36.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:36.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:14:36.545 00:14:36.545 --- 10.0.0.1 ping statistics --- 00:14:36.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.545 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:14:36.545 03:02:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.545 03:02:31 -- nvmf/common.sh@410 -- # return 0 00:14:36.545 03:02:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:36.545 03:02:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.545 03:02:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:36.545 03:02:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:36.545 03:02:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.545 03:02:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:36.545 03:02:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:36.545 03:02:31 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:36.545 03:02:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:36.545 03:02:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:36.545 03:02:31 -- common/autotest_common.sh@10 -- # set +x 00:14:36.545 03:02:31 -- nvmf/common.sh@469 -- # nvmfpid=1970059 00:14:36.545 03:02:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:36.545 03:02:31 -- nvmf/common.sh@470 -- # waitforlisten 1970059 00:14:36.545 03:02:31 -- common/autotest_common.sh@819 -- # '[' -z 1970059 ']' 00:14:36.545 03:02:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.545 03:02:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:36.545 03:02:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:36.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.545 03:02:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:36.545 03:02:31 -- common/autotest_common.sh@10 -- # set +x 00:14:36.803 [2024-07-14 03:02:31.810172] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:36.804 [2024-07-14 03:02:31.810237] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.804 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.804 [2024-07-14 03:02:31.881353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:36.804 [2024-07-14 03:02:31.974669] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:36.804 [2024-07-14 03:02:31.974843] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.804 [2024-07-14 03:02:31.974862] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.804 [2024-07-14 03:02:31.974887] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
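The interface and namespace bring-up traced at nvmf/common.sh@242-267 above condenses to about a dozen ip/iptables commands. A minimal sketch follows; the run()/DRY_RUN wrapper is an illustrative addition (not part of SPDK) so the sequence can be previewed without root, and the interface names are simply the ones this host exposes.

```shell
# Sketch of the namespace bring-up from nvmf/common.sh as traced above.
# run() and DRY_RUN are illustrative additions; with DRY_RUN=1 (the default
# here) the commands are only printed, since executing them requires root.
run() { if [[ "${DRY_RUN:-1}" == 1 ]]; then echo "+ $*"; else "$@"; fi; }

setup_nvmf_netns() {
    local tgt=cvl_0_0 ini=cvl_0_1 ns=cvl_0_0_ns_spdk
    run ip -4 addr flush "$tgt"
    run ip -4 addr flush "$ini"
    run ip netns add "$ns"
    run ip link set "$tgt" netns "$ns"            # target port lives in the netns
    run ip addr add 10.0.0.1/24 dev "$ini"        # initiator side, root namespace
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt"
    run ip link set "$ini" up
    run ip netns exec "$ns" ip link set "$tgt" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$ini" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                        # root ns -> netns
    run ip netns exec "$ns" ping -c 1 10.0.0.1    # netns -> root ns
}
```

With DRY_RUN=1 the function prints the twelve commands in order; DRY_RUN=0 would execute them for real.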
00:14:36.804 [2024-07-14 03:02:31.978893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.804 [2024-07-14 03:02:31.978921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:37.741 03:02:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:37.741 03:02:32 -- common/autotest_common.sh@852 -- # return 0 00:14:37.741 03:02:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:37.741 03:02:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:37.741 03:02:32 -- common/autotest_common.sh@10 -- # set +x 00:14:37.741 03:02:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:37.741 03:02:32 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:37.741 03:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.741 03:02:32 -- common/autotest_common.sh@10 -- # set +x 00:14:37.741 [2024-07-14 03:02:32.847335] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.741 03:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.741 03:02:32 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:37.741 03:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.741 03:02:32 -- common/autotest_common.sh@10 -- # set +x 00:14:37.741 03:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.741 03:02:32 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.741 03:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.741 03:02:32 -- common/autotest_common.sh@10 -- # set +x 00:14:37.741 [2024-07-14 03:02:32.863670] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.741 03:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
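waitforlisten above blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock, giving up after max_retries=100. The underlying idiom is a bounded poll over an arbitrary predicate; a generic sketch (the helper name and the 0.1 s interval are assumptions, not SPDK's exact code):

```shell
# Generic bounded-retry helper in the spirit of waitforlisten in
# autotest_common.sh, which polls until the target's RPC socket answers.
# Name and sleep interval are illustrative assumptions.
retry_until() {
    local max=$1; shift
    local i
    for ((i = 0; i < max; i++)); do
        "$@" && return 0          # predicate succeeded: stop polling
        sleep 0.1
    done
    return 1                      # retries exhausted, report failure
}
```

Typical use in this context would be something like: retry_until 100 test -S /var/tmp/spdk.sock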
00:14:37.741 03:02:32 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:37.741 03:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.741 03:02:32 -- common/autotest_common.sh@10 -- # set +x 00:14:37.741 NULL1 00:14:37.741 03:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.741 03:02:32 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:37.741 03:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.741 03:02:32 -- common/autotest_common.sh@10 -- # set +x 00:14:37.741 Delay0 00:14:37.741 03:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.741 03:02:32 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.741 03:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.741 03:02:32 -- common/autotest_common.sh@10 -- # set +x 00:14:37.741 03:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.741 03:02:32 -- target/delete_subsystem.sh@28 -- # perf_pid=1970220 00:14:37.741 03:02:32 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:37.741 03:02:32 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:37.741 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.741 [2024-07-14 03:02:32.938301] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
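The rpc_cmd calls above assemble the target that the test then deletes mid-I/O: a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev (1,000,000 us on every operation, which guarantees I/O is still in flight when the subsystem goes away), and the namespace mapping. The sequence as one function, with rpc() as an echoing stand-in for scripts/rpc.py against the running target:

```shell
# RPC sequence reconstructed from the xtrace above. rpc() is a stand-in
# stub that only echoes; the real harness sends these to nvmf_tgt over
# /var/tmp/spdk.sock via scripts/rpc.py.
rpc() { echo "rpc.py $*"; }

build_delay0_target() {
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_null_create NULL1 1000 512              # 1000 MB backing, 512 B blocks
    rpc bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000  # ~1 s added latency per I/O
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
}
```

spdk_nvme_perf is then pointed at the listener (trtype:tcp traddr:10.0.0.2 trsvcid:4420) to keep queues full while the deletion runs.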
00:14:40.267 03:02:34 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:40.267 03:02:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.267 03:02:34 -- common/autotest_common.sh@10 -- # set +x 00:14:40.267 Read completed with error (sct=0, sc=8) 00:14:40.267 Write completed with error (sct=0, sc=8) 00:14:40.267 starting I/O failed: -6
[... repeated "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" lines omitted ...]
00:14:40.267 [2024-07-14 03:02:35.111343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbe060 is same with the state(5) to be set
[... repeated completion-error lines omitted ...]
00:14:40.268 [2024-07-14 03:02:35.112150] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2d24000c00 is same with the state(5) to be set
[... repeated completion-error lines omitted ...]
00:14:40.837 [2024-07-14 03:02:36.079217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcc15e0 is same with the state(5) to be set
[... repeated completion-error lines omitted ...]
00:14:41.096 [2024-07-14 03:02:36.112663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbdd30 is same with the state(5) to be set
[... repeated completion-error lines omitted ...]
00:14:41.096 [2024-07-14 03:02:36.113115] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcbe310 is same with the state(5) to be set
[... repeated completion-error lines omitted ...]
00:14:41.096 [2024-07-14 03:02:36.114332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2d2400c600 is same with the state(5) to be set
[... repeated completion-error lines omitted ...]
00:14:41.096 [2024-07-14 03:02:36.114519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f2d2400bf20 is same with the state(5) to be set 00:14:41.096 [2024-07-14 03:02:36.115221] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcc15e0 (9):
Bad file descriptor 00:14:41.096 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:41.096 03:02:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.096 03:02:36 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:41.096 03:02:36 -- target/delete_subsystem.sh@35 -- # kill -0 1970220 00:14:41.096 03:02:36 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:41.096 Initializing NVMe Controllers 00:14:41.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:41.096 Controller IO queue size 128, less than required. 00:14:41.096 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:41.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:41.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:41.096 Initialization complete. Launching workers. 00:14:41.096 ======================================================== 00:14:41.097 Latency(us) 00:14:41.097 Device Information : IOPS MiB/s Average min max 00:14:41.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.62 0.09 881944.56 619.26 1013026.94 00:14:41.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.62 0.09 882310.10 381.34 1014562.66 00:14:41.097 ======================================================== 00:14:41.097 Total : 353.24 0.17 882127.33 381.34 1014562.66 00:14:41.097 00:14:41.665 03:02:36 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:41.665 03:02:36 -- target/delete_subsystem.sh@35 -- # kill -0 1970220 00:14:41.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1970220) - No such process 00:14:41.665 03:02:36 -- target/delete_subsystem.sh@45 -- # NOT wait 1970220 00:14:41.665 03:02:36 -- common/autotest_common.sh@640 -- # local es=0 
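After nvmf_delete_subsystem, the script polls until the perf process (pid 1970220 here) disappears: kill -0 probes whether the pid still exists, sleep 0.5 paces the loop, and the delay counter caps it at roughly 30 tries before the NOT-wrapped wait confirms the pid is really gone. The polling idiom as a standalone helper (the name and defaults are illustrative, not the script's literal code):

```shell
# Bounded poll for process exit, in the spirit of the delay/kill -0/sleep 0.5
# loop in delete_subsystem.sh. Helper name, interval, and cap are
# illustrative assumptions.
wait_for_exit() {
    local pid=$1 max=${2:-30} delay=0
    while kill -0 "$pid" 2>/dev/null; do   # kill -0: existence check, no signal
        (( delay++ > max )) && return 1    # still alive after the cap: give up
        sleep 0.5
    done
    return 0                               # process is gone
}
```

Signal 0 is the standard way to test a pid without affecting it; once the process exits, kill -0 fails and the loop falls through.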
00:14:41.665 03:02:36 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 1970220 00:14:41.665 03:02:36 -- common/autotest_common.sh@628 -- # local arg=wait 00:14:41.665 03:02:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:41.665 03:02:36 -- common/autotest_common.sh@632 -- # type -t wait 00:14:41.665 03:02:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:41.665 03:02:36 -- common/autotest_common.sh@643 -- # wait 1970220 00:14:41.665 03:02:36 -- common/autotest_common.sh@643 -- # es=1 00:14:41.665 03:02:36 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:41.665 03:02:36 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:41.665 03:02:36 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:41.665 03:02:36 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:41.665 03:02:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.665 03:02:36 -- common/autotest_common.sh@10 -- # set +x 00:14:41.665 03:02:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.665 03:02:36 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:41.665 03:02:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.665 03:02:36 -- common/autotest_common.sh@10 -- # set +x 00:14:41.665 [2024-07-14 03:02:36.635014] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.665 03:02:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.665 03:02:36 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.665 03:02:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.665 03:02:36 -- common/autotest_common.sh@10 -- # set +x 00:14:41.665 03:02:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.665 03:02:36 -- 
target/delete_subsystem.sh@54 -- # perf_pid=1970666 00:14:41.665 03:02:36 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:41.665 03:02:36 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:41.665 03:02:36 -- target/delete_subsystem.sh@57 -- # kill -0 1970666 00:14:41.665 03:02:36 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:41.665 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.665 [2024-07-14 03:02:36.690899] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:41.925 03:02:37 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:41.925 03:02:37 -- target/delete_subsystem.sh@57 -- # kill -0 1970666 00:14:41.925 03:02:37 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:42.494 03:02:37 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:42.494 03:02:37 -- target/delete_subsystem.sh@57 -- # kill -0 1970666 00:14:42.494 03:02:37 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:43.061 03:02:38 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:43.061 03:02:38 -- target/delete_subsystem.sh@57 -- # kill -0 1970666 00:14:43.061 03:02:38 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:43.628 03:02:38 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:43.628 03:02:38 -- target/delete_subsystem.sh@57 -- # kill -0 1970666 00:14:43.628 03:02:38 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:44.197 03:02:39 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:44.197 03:02:39 -- target/delete_subsystem.sh@57 -- # kill -0 1970666 00:14:44.197 03:02:39 -- target/delete_subsystem.sh@58 -- # sleep 
0.5 00:14:44.455 03:02:39 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:44.455 03:02:39 -- target/delete_subsystem.sh@57 -- # kill -0 1970666 00:14:44.455 03:02:39 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:44.715 Initializing NVMe Controllers 00:14:44.715 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:44.715 Controller IO queue size 128, less than required. 00:14:44.715 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:44.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:44.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:44.715 Initialization complete. Launching workers. 00:14:44.715 ======================================================== 00:14:44.715 Latency(us) 00:14:44.715 Device Information : IOPS MiB/s Average min max 00:14:44.715 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003724.54 1000231.20 1041640.17 00:14:44.715 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004621.64 1000257.62 1012938.65 00:14:44.715 ======================================================== 00:14:44.715 Total : 256.00 0.12 1004173.09 1000231.20 1041640.17 00:14:44.715 00:14:44.973 03:02:40 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:44.974 03:02:40 -- target/delete_subsystem.sh@57 -- # kill -0 1970666 00:14:44.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1970666) - No such process 00:14:44.974 03:02:40 -- target/delete_subsystem.sh@67 -- # wait 1970666 00:14:44.974 03:02:40 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:44.974 03:02:40 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:44.974 03:02:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:44.974 03:02:40 -- 
nvmf/common.sh@116 -- # sync 00:14:44.974 03:02:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:44.974 03:02:40 -- nvmf/common.sh@119 -- # set +e 00:14:44.974 03:02:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:44.974 03:02:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:44.974 rmmod nvme_tcp 00:14:44.974 rmmod nvme_fabrics 00:14:44.974 rmmod nvme_keyring 00:14:45.233 03:02:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:45.233 03:02:40 -- nvmf/common.sh@123 -- # set -e 00:14:45.233 03:02:40 -- nvmf/common.sh@124 -- # return 0 00:14:45.233 03:02:40 -- nvmf/common.sh@477 -- # '[' -n 1970059 ']' 00:14:45.233 03:02:40 -- nvmf/common.sh@478 -- # killprocess 1970059 00:14:45.233 03:02:40 -- common/autotest_common.sh@926 -- # '[' -z 1970059 ']' 00:14:45.233 03:02:40 -- common/autotest_common.sh@930 -- # kill -0 1970059 00:14:45.233 03:02:40 -- common/autotest_common.sh@931 -- # uname 00:14:45.233 03:02:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:45.233 03:02:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1970059 00:14:45.233 03:02:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:45.233 03:02:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:45.233 03:02:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1970059' 00:14:45.233 killing process with pid 1970059 00:14:45.233 03:02:40 -- common/autotest_common.sh@945 -- # kill 1970059 00:14:45.233 03:02:40 -- common/autotest_common.sh@950 -- # wait 1970059 00:14:45.493 03:02:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:45.493 03:02:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:45.493 03:02:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:45.493 03:02:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:45.493 03:02:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:45.493 03:02:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:14:45.493 03:02:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.493 03:02:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.441 03:02:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:47.441 00:14:47.441 real 0m12.952s 00:14:47.441 user 0m29.566s 00:14:47.441 sys 0m2.861s 00:14:47.441 03:02:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:47.441 03:02:42 -- common/autotest_common.sh@10 -- # set +x 00:14:47.441 ************************************ 00:14:47.441 END TEST nvmf_delete_subsystem 00:14:47.441 ************************************ 00:14:47.441 03:02:42 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:14:47.441 03:02:42 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:47.441 03:02:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:47.441 03:02:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:47.441 03:02:42 -- common/autotest_common.sh@10 -- # set +x 00:14:47.441 ************************************ 00:14:47.441 START TEST nvmf_nvme_cli 00:14:47.441 ************************************ 00:14:47.441 03:02:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:47.441 * Looking for test storage... 
00:14:47.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:47.441 03:02:42 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:47.441 03:02:42 -- nvmf/common.sh@7 -- # uname -s 00:14:47.441 03:02:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.441 03:02:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.441 03:02:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.441 03:02:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.441 03:02:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.441 03:02:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.441 03:02:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.441 03:02:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.441 03:02:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.441 03:02:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.441 03:02:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:47.441 03:02:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:47.441 03:02:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.441 03:02:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.441 03:02:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:47.441 03:02:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:47.441 03:02:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.441 03:02:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.441 03:02:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.441 03:02:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.441 03:02:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.441 03:02:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.441 03:02:42 -- paths/export.sh@5 -- # export PATH 00:14:47.441 03:02:42 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.441 03:02:42 -- nvmf/common.sh@46 -- # : 0 00:14:47.441 03:02:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:47.441 03:02:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:47.441 03:02:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:47.441 03:02:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.441 03:02:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.441 03:02:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:47.441 03:02:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:47.441 03:02:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:47.441 03:02:42 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:47.441 03:02:42 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:47.441 03:02:42 -- target/nvme_cli.sh@14 -- # devs=() 00:14:47.441 03:02:42 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:47.441 03:02:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:47.441 03:02:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.441 03:02:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:47.441 03:02:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:47.441 03:02:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:47.441 03:02:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.441 03:02:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.441 03:02:42 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:47.441 03:02:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:47.441 03:02:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:47.441 03:02:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:47.441 03:02:42 -- common/autotest_common.sh@10 -- # set +x 00:14:49.346 03:02:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:49.346 03:02:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:49.346 03:02:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:49.346 03:02:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:49.346 03:02:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:49.346 03:02:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:49.346 03:02:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:49.346 03:02:44 -- nvmf/common.sh@294 -- # net_devs=() 00:14:49.346 03:02:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:49.346 03:02:44 -- nvmf/common.sh@295 -- # e810=() 00:14:49.346 03:02:44 -- nvmf/common.sh@295 -- # local -ga e810 00:14:49.346 03:02:44 -- nvmf/common.sh@296 -- # x722=() 00:14:49.346 03:02:44 -- nvmf/common.sh@296 -- # local -ga x722 00:14:49.346 03:02:44 -- nvmf/common.sh@297 -- # mlx=() 00:14:49.346 03:02:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:49.346 03:02:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:49.346 03:02:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:49.346 03:02:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:49.346 03:02:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:49.346 03:02:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:49.346 03:02:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:49.346 03:02:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:49.346 03:02:44 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:49.346 03:02:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:49.346 03:02:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:49.346 03:02:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:49.346 03:02:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:49.346 03:02:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:49.346 03:02:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:49.346 03:02:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:49.346 03:02:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:49.346 03:02:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:49.346 03:02:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:49.346 03:02:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:49.346 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:49.346 03:02:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:49.346 03:02:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:49.346 03:02:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.346 03:02:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.346 03:02:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:49.346 03:02:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:49.346 03:02:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:49.346 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:49.346 03:02:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:49.346 03:02:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:49.346 03:02:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:49.346 03:02:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:49.346 03:02:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:49.346 03:02:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:49.346 03:02:44 -- 
nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:49.346 03:02:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:49.346 03:02:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:49.346 03:02:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.346 03:02:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:49.346 03:02:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.346 03:02:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:49.346 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:49.346 03:02:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.346 03:02:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:49.346 03:02:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:49.346 03:02:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:49.346 03:02:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:49.346 03:02:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:49.346 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:49.346 03:02:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:49.346 03:02:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:49.346 03:02:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:49.346 03:02:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:49.346 03:02:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:49.346 03:02:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:49.346 03:02:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:49.346 03:02:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:49.346 03:02:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:49.346 03:02:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:49.346 03:02:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:49.346 03:02:44 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:49.346 03:02:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:49.346 03:02:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:49.346 03:02:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:49.346 03:02:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:49.346 03:02:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:49.346 03:02:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:49.346 03:02:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:49.346 03:02:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:49.346 03:02:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:49.346 03:02:44 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:49.346 03:02:44 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:49.607 03:02:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:49.607 03:02:44 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:49.607 03:02:44 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:49.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:49.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:14:49.607 00:14:49.607 --- 10.0.0.2 ping statistics --- 00:14:49.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.607 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:14:49.607 03:02:44 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:49.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:49.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:14:49.607 00:14:49.607 --- 10.0.0.1 ping statistics --- 00:14:49.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:49.607 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:14:49.607 03:02:44 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:49.607 03:02:44 -- nvmf/common.sh@410 -- # return 0 00:14:49.607 03:02:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:49.607 03:02:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:49.607 03:02:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:49.607 03:02:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:49.607 03:02:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:49.607 03:02:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:49.607 03:02:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:49.607 03:02:44 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:49.607 03:02:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:49.607 03:02:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:49.607 03:02:44 -- common/autotest_common.sh@10 -- # set +x 00:14:49.607 03:02:44 -- nvmf/common.sh@469 -- # nvmfpid=1973118 00:14:49.607 03:02:44 -- nvmf/common.sh@470 -- # waitforlisten 1973118 00:14:49.607 03:02:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:49.607 03:02:44 -- common/autotest_common.sh@819 -- # '[' -z 1973118 ']' 00:14:49.607 03:02:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:49.607 03:02:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:49.607 03:02:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:49.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:49.607 03:02:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:49.607 03:02:44 -- common/autotest_common.sh@10 -- # set +x 00:14:49.607 [2024-07-14 03:02:44.733412] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:49.607 [2024-07-14 03:02:44.733497] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:49.607 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.607 [2024-07-14 03:02:44.805350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:49.865 [2024-07-14 03:02:44.893322] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:49.865 [2024-07-14 03:02:44.893475] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:49.865 [2024-07-14 03:02:44.893493] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:49.865 [2024-07-14 03:02:44.893505] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:49.865 [2024-07-14 03:02:44.893564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:49.865 [2024-07-14 03:02:44.893685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:49.865 [2024-07-14 03:02:44.893739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.865 [2024-07-14 03:02:44.893737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.433 03:02:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:50.433 03:02:45 -- common/autotest_common.sh@852 -- # return 0 00:14:50.433 03:02:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:50.433 03:02:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:50.433 03:02:45 -- common/autotest_common.sh@10 -- # set +x 00:14:50.694 03:02:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.694 03:02:45 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:50.694 03:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.694 03:02:45 -- common/autotest_common.sh@10 -- # set +x 00:14:50.694 [2024-07-14 03:02:45.700513] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.694 03:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.694 03:02:45 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:50.694 03:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.694 03:02:45 -- common/autotest_common.sh@10 -- # set +x 00:14:50.694 Malloc0 00:14:50.694 03:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.694 03:02:45 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:50.694 03:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.694 03:02:45 -- common/autotest_common.sh@10 -- # set +x 00:14:50.694 Malloc1 00:14:50.694 03:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:14:50.694 03:02:45 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:50.694 03:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.694 03:02:45 -- common/autotest_common.sh@10 -- # set +x 00:14:50.694 03:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.694 03:02:45 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:50.694 03:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.694 03:02:45 -- common/autotest_common.sh@10 -- # set +x 00:14:50.694 03:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.694 03:02:45 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:50.694 03:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.694 03:02:45 -- common/autotest_common.sh@10 -- # set +x 00:14:50.694 03:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.694 03:02:45 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.694 03:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.694 03:02:45 -- common/autotest_common.sh@10 -- # set +x 00:14:50.694 [2024-07-14 03:02:45.783817] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:50.694 03:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.694 03:02:45 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:50.694 03:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:50.694 03:02:45 -- common/autotest_common.sh@10 -- # set +x 00:14:50.694 03:02:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:50.694 03:02:45 -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:50.694 00:14:50.694 Discovery Log Number of Records 2, Generation counter 2 00:14:50.694 =====Discovery Log Entry 0====== 00:14:50.694 trtype: tcp 00:14:50.694 adrfam: ipv4 00:14:50.694 subtype: current discovery subsystem 00:14:50.694 treq: not required 00:14:50.694 portid: 0 00:14:50.694 trsvcid: 4420 00:14:50.694 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:50.694 traddr: 10.0.0.2 00:14:50.694 eflags: explicit discovery connections, duplicate discovery information 00:14:50.694 sectype: none 00:14:50.694 =====Discovery Log Entry 1====== 00:14:50.694 trtype: tcp 00:14:50.694 adrfam: ipv4 00:14:50.694 subtype: nvme subsystem 00:14:50.694 treq: not required 00:14:50.694 portid: 0 00:14:50.694 trsvcid: 4420 00:14:50.694 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:50.694 traddr: 10.0.0.2 00:14:50.694 eflags: none 00:14:50.694 sectype: none 00:14:50.694 03:02:45 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:50.694 03:02:45 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:50.694 03:02:45 -- nvmf/common.sh@510 -- # local dev _ 00:14:50.694 03:02:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:50.694 03:02:45 -- nvmf/common.sh@509 -- # nvme list 00:14:50.694 03:02:45 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:50.694 03:02:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:50.694 03:02:45 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:50.694 03:02:45 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:50.694 03:02:45 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:50.694 03:02:45 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:51.631 03:02:46 -- 
target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:51.631 03:02:46 -- common/autotest_common.sh@1177 -- # local i=0 00:14:51.631 03:02:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:51.631 03:02:46 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:14:51.631 03:02:46 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:14:51.631 03:02:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:53.539 03:02:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:53.539 03:02:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:53.539 03:02:48 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:53.539 03:02:48 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:14:53.539 03:02:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:53.539 03:02:48 -- common/autotest_common.sh@1187 -- # return 0 00:14:53.539 03:02:48 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:53.539 03:02:48 -- nvmf/common.sh@510 -- # local dev _ 00:14:53.539 03:02:48 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:53.539 03:02:48 -- nvmf/common.sh@509 -- # nvme list 00:14:53.539 03:02:48 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:53.539 03:02:48 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:53.539 03:02:48 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:53.539 03:02:48 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:53.539 03:02:48 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:53.539 03:02:48 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:14:53.539 03:02:48 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:53.539 03:02:48 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:53.539 03:02:48 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:14:53.539 03:02:48 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:53.539 03:02:48 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 
00:14:53.539 /dev/nvme0n1 ]] 00:14:53.539 03:02:48 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:53.539 03:02:48 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:53.539 03:02:48 -- nvmf/common.sh@510 -- # local dev _ 00:14:53.539 03:02:48 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:53.539 03:02:48 -- nvmf/common.sh@509 -- # nvme list 00:14:53.539 03:02:48 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:53.539 03:02:48 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:53.539 03:02:48 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:53.539 03:02:48 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:53.539 03:02:48 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:53.539 03:02:48 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:14:53.539 03:02:48 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:53.539 03:02:48 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:53.539 03:02:48 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:14:53.539 03:02:48 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:53.539 03:02:48 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:53.539 03:02:48 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:53.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.539 03:02:48 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:53.539 03:02:48 -- common/autotest_common.sh@1198 -- # local i=0 00:14:53.539 03:02:48 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:53.539 03:02:48 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.539 03:02:48 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:53.539 03:02:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:53.539 03:02:48 -- common/autotest_common.sh@1210 -- # return 0 00:14:53.539 03:02:48 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 
00:14:53.539 03:02:48 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.539 03:02:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.539 03:02:48 -- common/autotest_common.sh@10 -- # set +x 00:14:53.539 03:02:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.539 03:02:48 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:53.539 03:02:48 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:53.539 03:02:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:53.539 03:02:48 -- nvmf/common.sh@116 -- # sync 00:14:53.539 03:02:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:53.539 03:02:48 -- nvmf/common.sh@119 -- # set +e 00:14:53.539 03:02:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:53.539 03:02:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:53.539 rmmod nvme_tcp 00:14:53.539 rmmod nvme_fabrics 00:14:53.539 rmmod nvme_keyring 00:14:53.539 03:02:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:53.539 03:02:48 -- nvmf/common.sh@123 -- # set -e 00:14:53.539 03:02:48 -- nvmf/common.sh@124 -- # return 0 00:14:53.539 03:02:48 -- nvmf/common.sh@477 -- # '[' -n 1973118 ']' 00:14:53.539 03:02:48 -- nvmf/common.sh@478 -- # killprocess 1973118 00:14:53.539 03:02:48 -- common/autotest_common.sh@926 -- # '[' -z 1973118 ']' 00:14:53.539 03:02:48 -- common/autotest_common.sh@930 -- # kill -0 1973118 00:14:53.539 03:02:48 -- common/autotest_common.sh@931 -- # uname 00:14:53.539 03:02:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:53.539 03:02:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1973118 00:14:53.539 03:02:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:53.539 03:02:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:53.539 03:02:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1973118' 00:14:53.539 killing process with pid 1973118 00:14:53.539 03:02:48 -- 
common/autotest_common.sh@945 -- # kill 1973118 00:14:53.539 03:02:48 -- common/autotest_common.sh@950 -- # wait 1973118 00:14:54.106 03:02:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:54.106 03:02:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:54.106 03:02:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:54.106 03:02:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:54.106 03:02:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:54.106 03:02:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.106 03:02:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.106 03:02:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:56.008 03:02:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:56.009 00:14:56.009 real 0m8.531s 00:14:56.009 user 0m17.145s 00:14:56.009 sys 0m2.172s 00:14:56.009 03:02:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.009 03:02:51 -- common/autotest_common.sh@10 -- # set +x 00:14:56.009 ************************************ 00:14:56.009 END TEST nvmf_nvme_cli 00:14:56.009 ************************************ 00:14:56.009 03:02:51 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:14:56.009 03:02:51 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:56.009 03:02:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:56.009 03:02:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:56.009 03:02:51 -- common/autotest_common.sh@10 -- # set +x 00:14:56.009 ************************************ 00:14:56.009 START TEST nvmf_vfio_user 00:14:56.009 ************************************ 00:14:56.009 03:02:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:56.009 * Looking for test storage... 
00:14:56.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:56.009 03:02:51 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:56.009 03:02:51 -- nvmf/common.sh@7 -- # uname -s 00:14:56.009 03:02:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:56.009 03:02:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:56.009 03:02:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:56.009 03:02:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:56.009 03:02:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:56.009 03:02:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:56.009 03:02:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:56.009 03:02:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:56.009 03:02:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:56.009 03:02:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:56.009 03:02:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.009 03:02:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:56.009 03:02:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:56.009 03:02:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:56.009 03:02:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:56.009 03:02:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:56.009 03:02:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:56.009 03:02:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:56.009 03:02:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:56.009 03:02:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.009 03:02:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.009 03:02:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.009 03:02:51 -- paths/export.sh@5 -- # export PATH 00:14:56.009 03:02:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:56.009 03:02:51 -- nvmf/common.sh@46 -- # : 0 00:14:56.009 03:02:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:56.009 03:02:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:56.009 03:02:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:56.009 03:02:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:56.009 03:02:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:56.009 03:02:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:56.009 03:02:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:56.009 03:02:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:56.009 03:02:51 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:56.009 03:02:51 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:56.009 03:02:51 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:56.009 03:02:51 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:56.009 03:02:51 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:56.009 03:02:51 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:56.009 03:02:51 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:56.009 03:02:51 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:56.009 03:02:51 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:56.009 03:02:51 -- target/nvmf_vfio_user.sh@52 -- # local 
transport_args= 00:14:56.009 03:02:51 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1973950 00:14:56.009 03:02:51 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:56.009 03:02:51 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1973950' 00:14:56.009 Process pid: 1973950 00:14:56.009 03:02:51 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:56.009 03:02:51 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1973950 00:14:56.009 03:02:51 -- common/autotest_common.sh@819 -- # '[' -z 1973950 ']' 00:14:56.009 03:02:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.009 03:02:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:56.009 03:02:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.009 03:02:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:56.009 03:02:51 -- common/autotest_common.sh@10 -- # set +x 00:14:56.009 [2024-07-14 03:02:51.241374] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:14:56.009 [2024-07-14 03:02:51.241456] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.266 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.266 [2024-07-14 03:02:51.304507] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.266 [2024-07-14 03:02:51.393553] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:56.266 [2024-07-14 03:02:51.393730] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.266 [2024-07-14 03:02:51.393749] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.266 [2024-07-14 03:02:51.393761] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.266 [2024-07-14 03:02:51.394890] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.266 [2024-07-14 03:02:51.394917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.266 [2024-07-14 03:02:51.394968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:56.266 [2024-07-14 03:02:51.394971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.201 03:02:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:57.201 03:02:52 -- common/autotest_common.sh@852 -- # return 0 00:14:57.201 03:02:52 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:58.138 03:02:53 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:58.397 03:02:53 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:58.397 03:02:53 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:58.397 03:02:53 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:14:58.397 03:02:53 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:58.397 03:02:53 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:58.655 Malloc1 00:14:58.655 03:02:53 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:58.912 03:02:53 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:59.169 03:02:54 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:59.427 03:02:54 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:59.427 03:02:54 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:59.427 03:02:54 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:59.685 Malloc2 00:14:59.685 03:02:54 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:59.943 03:02:54 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:00.203 03:02:55 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:00.203 03:02:55 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:00.203 03:02:55 -- 
target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:00.464 03:02:55 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:00.464 03:02:55 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:00.464 03:02:55 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:00.464 03:02:55 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:00.464 [2024-07-14 03:02:55.477286] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:00.464 [2024-07-14 03:02:55.477327] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974509 ] 00:15:00.464 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.464 [2024-07-14 03:02:55.510194] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:00.464 [2024-07-14 03:02:55.519284] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:00.464 [2024-07-14 03:02:55.519314] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fcbf67c6000 00:15:00.464 [2024-07-14 03:02:55.520278] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.464 [2024-07-14 03:02:55.521265] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.464 [2024-07-14 03:02:55.522271] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.464 [2024-07-14 03:02:55.523281] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:00.464 [2024-07-14 03:02:55.524287] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:00.464 [2024-07-14 03:02:55.525290] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.464 [2024-07-14 03:02:55.526298] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:00.464 [2024-07-14 03:02:55.527308] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:00.464 [2024-07-14 03:02:55.528314] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:00.464 [2024-07-14 03:02:55.528334] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fcbf557c000 00:15:00.464 [2024-07-14 03:02:55.529492] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:00.464 [2024-07-14 03:02:55.544501] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:00.464 [2024-07-14 03:02:55.544538] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:00.464 [2024-07-14 03:02:55.549447] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:00.464 
[2024-07-14 03:02:55.549495] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:00.464 [2024-07-14 03:02:55.549586] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:00.464 [2024-07-14 03:02:55.549623] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:00.464 [2024-07-14 03:02:55.549634] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:00.464 [2024-07-14 03:02:55.550437] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:00.464 [2024-07-14 03:02:55.550457] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:00.464 [2024-07-14 03:02:55.550469] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:00.464 [2024-07-14 03:02:55.551445] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:00.464 [2024-07-14 03:02:55.551462] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:00.464 [2024-07-14 03:02:55.551475] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:00.464 [2024-07-14 03:02:55.552450] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:00.464 [2024-07-14 03:02:55.552468] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:00.464 [2024-07-14 03:02:55.553458] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:00.464 [2024-07-14 03:02:55.553477] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:00.464 [2024-07-14 03:02:55.553486] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:00.464 [2024-07-14 03:02:55.553497] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:00.464 [2024-07-14 03:02:55.553607] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:00.464 [2024-07-14 03:02:55.553614] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:00.464 [2024-07-14 03:02:55.553623] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:00.464 [2024-07-14 03:02:55.554463] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:00.464 [2024-07-14 03:02:55.555464] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:00.464 [2024-07-14 03:02:55.556475] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:00.464 [2024-07-14 03:02:55.557518] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:00.464 [2024-07-14 03:02:55.558486] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:00.465 [2024-07-14 03:02:55.558503] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:00.465 [2024-07-14 03:02:55.558512] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.558540] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:00.465 [2024-07-14 03:02:55.558554] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.558575] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.465 [2024-07-14 03:02:55.558585] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.465 [2024-07-14 03:02:55.558604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.465 [2024-07-14 03:02:55.558693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:00.465 [2024-07-14 03:02:55.558708] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:00.465 [2024-07-14 03:02:55.558717] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:00.465 [2024-07-14 03:02:55.558724] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:00.465 [2024-07-14 03:02:55.558731] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:00.465 [2024-07-14 03:02:55.558739] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:00.465 [2024-07-14 03:02:55.558746] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:00.465 [2024-07-14 03:02:55.558754] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.558771] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.558786] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:00.465 [2024-07-14 03:02:55.558803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:00.465 [2024-07-14 03:02:55.558823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.465 [2024-07-14 03:02:55.558836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.465 [2024-07-14 03:02:55.558861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.465 [2024-07-14 
03:02:55.558881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:00.465 [2024-07-14 03:02:55.558891] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.558907] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.558921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:00.465 [2024-07-14 03:02:55.558934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:00.465 [2024-07-14 03:02:55.558944] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:00.465 [2024-07-14 03:02:55.558957] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.558969] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.558985] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.558999] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:00.465 [2024-07-14 03:02:55.559012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 
dnr:0 00:15:00.465 [2024-07-14 03:02:55.559076] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.559091] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.559105] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:00.465 [2024-07-14 03:02:55.559113] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:00.465 [2024-07-14 03:02:55.559123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:00.465 [2024-07-14 03:02:55.559137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:00.465 [2024-07-14 03:02:55.559173] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:00.465 [2024-07-14 03:02:55.559193] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.559207] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.559219] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.465 [2024-07-14 03:02:55.559242] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.465 [2024-07-14 03:02:55.559252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 
0x2000002fb000 PRP2 0x0 00:15:00.465 [2024-07-14 03:02:55.559279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:00.465 [2024-07-14 03:02:55.559300] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.559313] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.559325] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:00.465 [2024-07-14 03:02:55.559332] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.465 [2024-07-14 03:02:55.559342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.465 [2024-07-14 03:02:55.559355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:00.465 [2024-07-14 03:02:55.559368] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.559379] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.559395] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.559405] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 
ms) 00:15:00.465 [2024-07-14 03:02:55.559414] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.559422] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:00.465 [2024-07-14 03:02:55.559430] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:00.465 [2024-07-14 03:02:55.559438] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:00.465 [2024-07-14 03:02:55.559464] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:00.465 [2024-07-14 03:02:55.559483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:00.465 [2024-07-14 03:02:55.559501] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:00.465 [2024-07-14 03:02:55.559513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:00.465 [2024-07-14 03:02:55.559528] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:00.465 [2024-07-14 03:02:55.559539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:00.465 [2024-07-14 03:02:55.559554] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:00.465 [2024-07-14 03:02:55.559568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:00.465 [2024-07-14 03:02:55.559585] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:00.465 [2024-07-14 03:02:55.559594] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:00.465 [2024-07-14 03:02:55.559600] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:00.465 [2024-07-14 03:02:55.559606] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:00.465 [2024-07-14 03:02:55.559615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:00.465 [2024-07-14 03:02:55.559626] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:00.465 [2024-07-14 03:02:55.559634] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:00.465 [2024-07-14 03:02:55.559643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:00.465 [2024-07-14 03:02:55.559654] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:00.465 [2024-07-14 03:02:55.559661] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:00.465 [2024-07-14 03:02:55.559670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:00.465 [2024-07-14 03:02:55.559681] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:00.465 [2024-07-14 03:02:55.559689] 
nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:00.465 [2024-07-14 03:02:55.559701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:00.465 [2024-07-14 03:02:55.559713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:00.465 [2024-07-14 03:02:55.559734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:00.465 [2024-07-14 03:02:55.559748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:00.465 [2024-07-14 03:02:55.559760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:00.465 ===================================================== 00:15:00.466 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:00.466 ===================================================== 00:15:00.466 Controller Capabilities/Features 00:15:00.466 ================================ 00:15:00.466 Vendor ID: 4e58 00:15:00.466 Subsystem Vendor ID: 4e58 00:15:00.466 Serial Number: SPDK1 00:15:00.466 Model Number: SPDK bdev Controller 00:15:00.466 Firmware Version: 24.01.1 00:15:00.466 Recommended Arb Burst: 6 00:15:00.466 IEEE OUI Identifier: 8d 6b 50 00:15:00.466 Multi-path I/O 00:15:00.466 May have multiple subsystem ports: Yes 00:15:00.466 May have multiple controllers: Yes 00:15:00.466 Associated with SR-IOV VF: No 00:15:00.466 Max Data Transfer Size: 131072 00:15:00.466 Max Number of Namespaces: 32 00:15:00.466 Max Number of I/O Queues: 127 00:15:00.466 NVMe Specification Version (VS): 1.3 00:15:00.466 NVMe Specification Version (Identify): 1.3 00:15:00.466 Maximum Queue Entries: 256 00:15:00.466 
Contiguous Queues Required: Yes 00:15:00.466 Arbitration Mechanisms Supported 00:15:00.466 Weighted Round Robin: Not Supported 00:15:00.466 Vendor Specific: Not Supported 00:15:00.466 Reset Timeout: 15000 ms 00:15:00.466 Doorbell Stride: 4 bytes 00:15:00.466 NVM Subsystem Reset: Not Supported 00:15:00.466 Command Sets Supported 00:15:00.466 NVM Command Set: Supported 00:15:00.466 Boot Partition: Not Supported 00:15:00.466 Memory Page Size Minimum: 4096 bytes 00:15:00.466 Memory Page Size Maximum: 4096 bytes 00:15:00.466 Persistent Memory Region: Not Supported 00:15:00.466 Optional Asynchronous Events Supported 00:15:00.466 Namespace Attribute Notices: Supported 00:15:00.466 Firmware Activation Notices: Not Supported 00:15:00.466 ANA Change Notices: Not Supported 00:15:00.466 PLE Aggregate Log Change Notices: Not Supported 00:15:00.466 LBA Status Info Alert Notices: Not Supported 00:15:00.466 EGE Aggregate Log Change Notices: Not Supported 00:15:00.466 Normal NVM Subsystem Shutdown event: Not Supported 00:15:00.466 Zone Descriptor Change Notices: Not Supported 00:15:00.466 Discovery Log Change Notices: Not Supported 00:15:00.466 Controller Attributes 00:15:00.466 128-bit Host Identifier: Supported 00:15:00.466 Non-Operational Permissive Mode: Not Supported 00:15:00.466 NVM Sets: Not Supported 00:15:00.466 Read Recovery Levels: Not Supported 00:15:00.466 Endurance Groups: Not Supported 00:15:00.466 Predictable Latency Mode: Not Supported 00:15:00.466 Traffic Based Keep ALive: Not Supported 00:15:00.466 Namespace Granularity: Not Supported 00:15:00.466 SQ Associations: Not Supported 00:15:00.466 UUID List: Not Supported 00:15:00.466 Multi-Domain Subsystem: Not Supported 00:15:00.466 Fixed Capacity Management: Not Supported 00:15:00.466 Variable Capacity Management: Not Supported 00:15:00.466 Delete Endurance Group: Not Supported 00:15:00.466 Delete NVM Set: Not Supported 00:15:00.466 Extended LBA Formats Supported: Not Supported 00:15:00.466 Flexible Data Placement 
Supported: Not Supported 00:15:00.466 00:15:00.466 Controller Memory Buffer Support 00:15:00.466 ================================ 00:15:00.466 Supported: No 00:15:00.466 00:15:00.466 Persistent Memory Region Support 00:15:00.466 ================================ 00:15:00.466 Supported: No 00:15:00.466 00:15:00.466 Admin Command Set Attributes 00:15:00.466 ============================ 00:15:00.466 Security Send/Receive: Not Supported 00:15:00.466 Format NVM: Not Supported 00:15:00.466 Firmware Activate/Download: Not Supported 00:15:00.466 Namespace Management: Not Supported 00:15:00.466 Device Self-Test: Not Supported 00:15:00.466 Directives: Not Supported 00:15:00.466 NVMe-MI: Not Supported 00:15:00.466 Virtualization Management: Not Supported 00:15:00.466 Doorbell Buffer Config: Not Supported 00:15:00.466 Get LBA Status Capability: Not Supported 00:15:00.466 Command & Feature Lockdown Capability: Not Supported 00:15:00.466 Abort Command Limit: 4 00:15:00.466 Async Event Request Limit: 4 00:15:00.466 Number of Firmware Slots: N/A 00:15:00.466 Firmware Slot 1 Read-Only: N/A 00:15:00.466 Firmware Activation Without Reset: N/A 00:15:00.466 Multiple Update Detection Support: N/A 00:15:00.466 Firmware Update Granularity: No Information Provided 00:15:00.466 Per-Namespace SMART Log: No 00:15:00.466 Asymmetric Namespace Access Log Page: Not Supported 00:15:00.466 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:00.466 Command Effects Log Page: Supported 00:15:00.466 Get Log Page Extended Data: Supported 00:15:00.466 Telemetry Log Pages: Not Supported 00:15:00.466 Persistent Event Log Pages: Not Supported 00:15:00.466 Supported Log Pages Log Page: May Support 00:15:00.466 Commands Supported & Effects Log Page: Not Supported 00:15:00.466 Feature Identifiers & Effects Log Page:May Support 00:15:00.466 NVMe-MI Commands & Effects Log Page: May Support 00:15:00.466 Data Area 4 for Telemetry Log: Not Supported 00:15:00.466 Error Log Page Entries Supported: 128 00:15:00.466 Keep 
Alive: Supported 00:15:00.466 Keep Alive Granularity: 10000 ms 00:15:00.466 00:15:00.466 NVM Command Set Attributes 00:15:00.466 ========================== 00:15:00.466 Submission Queue Entry Size 00:15:00.466 Max: 64 00:15:00.466 Min: 64 00:15:00.466 Completion Queue Entry Size 00:15:00.466 Max: 16 00:15:00.466 Min: 16 00:15:00.466 Number of Namespaces: 32 00:15:00.466 Compare Command: Supported 00:15:00.466 Write Uncorrectable Command: Not Supported 00:15:00.466 Dataset Management Command: Supported 00:15:00.466 Write Zeroes Command: Supported 00:15:00.466 Set Features Save Field: Not Supported 00:15:00.466 Reservations: Not Supported 00:15:00.466 Timestamp: Not Supported 00:15:00.466 Copy: Supported 00:15:00.466 Volatile Write Cache: Present 00:15:00.466 Atomic Write Unit (Normal): 1 00:15:00.466 Atomic Write Unit (PFail): 1 00:15:00.466 Atomic Compare & Write Unit: 1 00:15:00.466 Fused Compare & Write: Supported 00:15:00.466 Scatter-Gather List 00:15:00.466 SGL Command Set: Supported (Dword aligned) 00:15:00.466 SGL Keyed: Not Supported 00:15:00.466 SGL Bit Bucket Descriptor: Not Supported 00:15:00.466 SGL Metadata Pointer: Not Supported 00:15:00.466 Oversized SGL: Not Supported 00:15:00.466 SGL Metadata Address: Not Supported 00:15:00.466 SGL Offset: Not Supported 00:15:00.466 Transport SGL Data Block: Not Supported 00:15:00.466 Replay Protected Memory Block: Not Supported 00:15:00.466 00:15:00.466 Firmware Slot Information 00:15:00.466 ========================= 00:15:00.466 Active slot: 1 00:15:00.466 Slot 1 Firmware Revision: 24.01.1 00:15:00.466 00:15:00.466 00:15:00.466 Commands Supported and Effects 00:15:00.466 ============================== 00:15:00.466 Admin Commands 00:15:00.466 -------------- 00:15:00.466 Get Log Page (02h): Supported 00:15:00.466 Identify (06h): Supported 00:15:00.466 Abort (08h): Supported 00:15:00.466 Set Features (09h): Supported 00:15:00.466 Get Features (0Ah): Supported 00:15:00.466 Asynchronous Event Request (0Ch): Supported 
00:15:00.466 Keep Alive (18h): Supported 00:15:00.466 I/O Commands 00:15:00.466 ------------ 00:15:00.466 Flush (00h): Supported LBA-Change 00:15:00.466 Write (01h): Supported LBA-Change 00:15:00.466 Read (02h): Supported 00:15:00.466 Compare (05h): Supported 00:15:00.466 Write Zeroes (08h): Supported LBA-Change 00:15:00.466 Dataset Management (09h): Supported LBA-Change 00:15:00.466 Copy (19h): Supported LBA-Change 00:15:00.466 Unknown (79h): Supported LBA-Change 00:15:00.466 Unknown (7Ah): Supported 00:15:00.466 00:15:00.466 Error Log 00:15:00.466 ========= 00:15:00.466 00:15:00.466 Arbitration 00:15:00.466 =========== 00:15:00.466 Arbitration Burst: 1 00:15:00.466 00:15:00.466 Power Management 00:15:00.466 ================ 00:15:00.466 Number of Power States: 1 00:15:00.466 Current Power State: Power State #0 00:15:00.466 Power State #0: 00:15:00.466 Max Power: 0.00 W 00:15:00.466 Non-Operational State: Operational 00:15:00.466 Entry Latency: Not Reported 00:15:00.466 Exit Latency: Not Reported 00:15:00.466 Relative Read Throughput: 0 00:15:00.466 Relative Read Latency: 0 00:15:00.466 Relative Write Throughput: 0 00:15:00.466 Relative Write Latency: 0 00:15:00.466 Idle Power: Not Reported 00:15:00.466 Active Power: Not Reported 00:15:00.466 Non-Operational Permissive Mode: Not Supported 00:15:00.466 00:15:00.466 Health Information 00:15:00.466 ================== 00:15:00.466 Critical Warnings: 00:15:00.466 Available Spare Space: OK 00:15:00.466 Temperature: OK 00:15:00.466 Device Reliability: OK 00:15:00.466 Read Only: No 00:15:00.466 Volatile Memory Backup: OK 00:15:00.466 Current Temperature: 0 Kelvin[2024-07-14 03:02:55.559920] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:00.466 [2024-07-14 03:02:55.559937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:00.466 [2024-07-14 03:02:55.559977] 
nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:00.466 [2024-07-14 03:02:55.559994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.467 [2024-07-14 03:02:55.560005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.467 [2024-07-14 03:02:55.560014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.467 [2024-07-14 03:02:55.560024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:00.467 [2024-07-14 03:02:55.563876] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:00.467 [2024-07-14 03:02:55.563897] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:00.467 [2024-07-14 03:02:55.564547] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:00.467 [2024-07-14 03:02:55.564559] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:00.467 [2024-07-14 03:02:55.565512] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:00.467 [2024-07-14 03:02:55.565535] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:00.467 [2024-07-14 03:02:55.565588] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:00.467 
[2024-07-14 03:02:55.567556] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:00.467 (-273 Celsius) 00:15:00.467 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:00.467 Available Spare: 0% 00:15:00.467 Available Spare Threshold: 0% 00:15:00.467 Life Percentage Used: 0% 00:15:00.467 Data Units Read: 0 00:15:00.467 Data Units Written: 0 00:15:00.467 Host Read Commands: 0 00:15:00.467 Host Write Commands: 0 00:15:00.467 Controller Busy Time: 0 minutes 00:15:00.467 Power Cycles: 0 00:15:00.467 Power On Hours: 0 hours 00:15:00.467 Unsafe Shutdowns: 0 00:15:00.467 Unrecoverable Media Errors: 0 00:15:00.467 Lifetime Error Log Entries: 0 00:15:00.467 Warning Temperature Time: 0 minutes 00:15:00.467 Critical Temperature Time: 0 minutes 00:15:00.467 00:15:00.467 Number of Queues 00:15:00.467 ================ 00:15:00.467 Number of I/O Submission Queues: 127 00:15:00.467 Number of I/O Completion Queues: 127 00:15:00.467 00:15:00.467 Active Namespaces 00:15:00.467 ================= 00:15:00.467 Namespace ID:1 00:15:00.467 Error Recovery Timeout: Unlimited 00:15:00.467 Command Set Identifier: NVM (00h) 00:15:00.467 Deallocate: Supported 00:15:00.467 Deallocated/Unwritten Error: Not Supported 00:15:00.467 Deallocated Read Value: Unknown 00:15:00.467 Deallocate in Write Zeroes: Not Supported 00:15:00.467 Deallocated Guard Field: 0xFFFF 00:15:00.467 Flush: Supported 00:15:00.467 Reservation: Supported 00:15:00.467 Namespace Sharing Capabilities: Multiple Controllers 00:15:00.467 Size (in LBAs): 131072 (0GiB) 00:15:00.467 Capacity (in LBAs): 131072 (0GiB) 00:15:00.467 Utilization (in LBAs): 131072 (0GiB) 00:15:00.467 NGUID: EF5F0CC5C7C448B6811B594DA0766F16 00:15:00.467 UUID: ef5f0cc5-c7c4-48b6-811b-594da0766f16 00:15:00.467 Thin Provisioning: Not Supported 00:15:00.467 Per-NS Atomic Units: Yes 00:15:00.467 Atomic Boundary Size (Normal): 0 00:15:00.467 Atomic Boundary Size (PFail): 0 
00:15:00.467 Atomic Boundary Offset: 0 00:15:00.467 Maximum Single Source Range Length: 65535 00:15:00.467 Maximum Copy Length: 65535 00:15:00.467 Maximum Source Range Count: 1 00:15:00.467 NGUID/EUI64 Never Reused: No 00:15:00.467 Namespace Write Protected: No 00:15:00.467 Number of LBA Formats: 1 00:15:00.467 Current LBA Format: LBA Format #00 00:15:00.467 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:00.467 00:15:00.467 03:02:55 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:00.467 EAL: No free 2048 kB hugepages reported on node 1 00:15:05.771 Initializing NVMe Controllers 00:15:05.771 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:05.771 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:05.771 Initialization complete. Launching workers. 
00:15:05.771 ======================================================== 00:15:05.771 Latency(us) 00:15:05.771 Device Information : IOPS MiB/s Average min max 00:15:05.771 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 36814.73 143.81 3476.18 1134.29 9002.11 00:15:05.771 ======================================================== 00:15:05.771 Total : 36814.73 143.81 3476.18 1134.29 9002.11 00:15:05.771 00:15:05.771 03:03:00 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:05.771 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.043 Initializing NVMe Controllers 00:15:11.043 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:11.043 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:11.043 Initialization complete. Launching workers. 
00:15:11.043 ======================================================== 00:15:11.043 Latency(us) 00:15:11.043 Device Information : IOPS MiB/s Average min max 00:15:11.043 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15955.31 62.33 8027.64 7708.29 15970.04 00:15:11.043 ======================================================== 00:15:11.043 Total : 15955.31 62.33 8027.64 7708.29 15970.04 00:15:11.043 00:15:11.043 03:03:06 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:11.043 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.333 Initializing NVMe Controllers 00:15:16.333 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:16.333 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:16.333 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:16.333 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:16.333 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:16.333 Initialization complete. Launching workers. 
00:15:16.333 Starting thread on core 2 00:15:16.333 Starting thread on core 3 00:15:16.333 Starting thread on core 1 00:15:16.333 03:03:11 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:16.333 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.669 Initializing NVMe Controllers 00:15:19.669 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.669 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.669 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:19.669 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:19.669 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:19.669 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:19.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:19.669 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:19.669 Initialization complete. Launching workers. 
00:15:19.669 Starting thread on core 1 with urgent priority queue 00:15:19.669 Starting thread on core 2 with urgent priority queue 00:15:19.669 Starting thread on core 3 with urgent priority queue 00:15:19.669 Starting thread on core 0 with urgent priority queue 00:15:19.669 SPDK bdev Controller (SPDK1 ) core 0: 6054.33 IO/s 16.52 secs/100000 ios 00:15:19.669 SPDK bdev Controller (SPDK1 ) core 1: 4858.00 IO/s 20.58 secs/100000 ios 00:15:19.669 SPDK bdev Controller (SPDK1 ) core 2: 5734.33 IO/s 17.44 secs/100000 ios 00:15:19.669 SPDK bdev Controller (SPDK1 ) core 3: 6289.33 IO/s 15.90 secs/100000 ios 00:15:19.669 ======================================================== 00:15:19.670 00:15:19.670 03:03:14 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:19.670 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.928 Initializing NVMe Controllers 00:15:19.928 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.928 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:19.928 Namespace ID: 1 size: 0GB 00:15:19.928 Initialization complete. 00:15:19.928 INFO: using host memory buffer for IO 00:15:19.928 Hello world! 00:15:19.928 03:03:15 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:20.188 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.568 Initializing NVMe Controllers 00:15:21.568 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.568 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:21.568 Initialization complete. Launching workers. 
00:15:21.568 submit (in ns) avg, min, max = 8750.2, 3475.6, 4998713.3 00:15:21.568 complete (in ns) avg, min, max = 22996.4, 2054.4, 4999966.7 00:15:21.568 00:15:21.568 Submit histogram 00:15:21.568 ================ 00:15:21.568 Range in us Cumulative Count 00:15:21.568 3.461 - 3.484: 0.0845% ( 12) 00:15:21.568 3.484 - 3.508: 0.6341% ( 78) 00:15:21.568 3.508 - 3.532: 1.6345% ( 142) 00:15:21.568 3.532 - 3.556: 5.0162% ( 480) 00:15:21.568 3.556 - 3.579: 10.4340% ( 769) 00:15:21.568 3.579 - 3.603: 17.7892% ( 1044) 00:15:21.568 3.603 - 3.627: 26.1378% ( 1185) 00:15:21.568 3.627 - 3.650: 36.2548% ( 1436) 00:15:21.568 3.650 - 3.674: 44.0961% ( 1113) 00:15:21.568 3.674 - 3.698: 51.2400% ( 1014) 00:15:21.568 3.698 - 3.721: 57.2073% ( 847) 00:15:21.568 3.721 - 3.745: 62.0262% ( 684) 00:15:21.568 3.745 - 3.769: 66.0984% ( 578) 00:15:21.568 3.769 - 3.793: 70.2057% ( 583) 00:15:21.568 3.793 - 3.816: 73.6086% ( 483) 00:15:21.568 3.816 - 3.840: 77.1100% ( 497) 00:15:21.568 3.840 - 3.864: 80.5340% ( 486) 00:15:21.568 3.864 - 3.887: 83.2464% ( 385) 00:15:21.568 3.887 - 3.911: 85.8954% ( 376) 00:15:21.568 3.911 - 3.935: 88.0865% ( 311) 00:15:21.568 3.935 - 3.959: 89.9112% ( 259) 00:15:21.568 3.959 - 3.982: 91.5457% ( 232) 00:15:21.568 3.982 - 4.006: 92.9830% ( 204) 00:15:21.568 4.006 - 4.030: 94.0538% ( 152) 00:15:21.568 4.030 - 4.053: 94.7865% ( 104) 00:15:21.568 4.053 - 4.077: 95.3008% ( 73) 00:15:21.569 4.077 - 4.101: 95.7588% ( 65) 00:15:21.569 4.101 - 4.124: 96.1533% ( 56) 00:15:21.569 4.124 - 4.148: 96.3999% ( 35) 00:15:21.569 4.148 - 4.172: 96.5478% ( 21) 00:15:21.569 4.172 - 4.196: 96.6817% ( 19) 00:15:21.569 4.196 - 4.219: 96.7662% ( 12) 00:15:21.569 4.219 - 4.243: 96.8508% ( 12) 00:15:21.569 4.243 - 4.267: 96.9706% ( 17) 00:15:21.569 4.267 - 4.290: 97.0480% ( 11) 00:15:21.569 4.290 - 4.314: 97.0833% ( 5) 00:15:21.569 4.314 - 4.338: 97.1608% ( 11) 00:15:21.569 4.338 - 4.361: 97.2312% ( 10) 00:15:21.569 4.361 - 4.385: 97.2805% ( 7) 00:15:21.569 4.385 - 4.409: 97.3087% ( 4) 
00:15:21.569 4.409 - 4.433: 97.3158% ( 1) 00:15:21.569 4.433 - 4.456: 97.3228% ( 1) 00:15:21.569 4.456 - 4.480: 97.3299% ( 1) 00:15:21.569 4.480 - 4.504: 97.3369% ( 1) 00:15:21.569 4.527 - 4.551: 97.3439% ( 1) 00:15:21.569 4.551 - 4.575: 97.3792% ( 5) 00:15:21.569 4.575 - 4.599: 97.3862% ( 1) 00:15:21.569 4.599 - 4.622: 97.4355% ( 7) 00:15:21.569 4.622 - 4.646: 97.4778% ( 6) 00:15:21.569 4.646 - 4.670: 97.5271% ( 7) 00:15:21.569 4.670 - 4.693: 97.5553% ( 4) 00:15:21.569 4.693 - 4.717: 97.6117% ( 8) 00:15:21.569 4.717 - 4.741: 97.6680% ( 8) 00:15:21.569 4.741 - 4.764: 97.7173% ( 7) 00:15:21.569 4.764 - 4.788: 97.7667% ( 7) 00:15:21.569 4.788 - 4.812: 97.8019% ( 5) 00:15:21.569 4.812 - 4.836: 97.8442% ( 6) 00:15:21.569 4.836 - 4.859: 97.8864% ( 6) 00:15:21.569 4.859 - 4.883: 97.9146% ( 4) 00:15:21.569 4.883 - 4.907: 97.9357% ( 3) 00:15:21.569 4.907 - 4.930: 97.9639% ( 4) 00:15:21.569 4.930 - 4.954: 98.0062% ( 6) 00:15:21.569 4.954 - 4.978: 98.0626% ( 8) 00:15:21.569 4.978 - 5.001: 98.0767% ( 2) 00:15:21.569 5.001 - 5.025: 98.0907% ( 2) 00:15:21.569 5.025 - 5.049: 98.1189% ( 4) 00:15:21.569 5.049 - 5.073: 98.1260% ( 1) 00:15:21.569 5.073 - 5.096: 98.1541% ( 4) 00:15:21.569 5.096 - 5.120: 98.1823% ( 4) 00:15:21.569 5.144 - 5.167: 98.2176% ( 5) 00:15:21.569 5.191 - 5.215: 98.2246% ( 1) 00:15:21.569 5.215 - 5.239: 98.2598% ( 5) 00:15:21.569 5.262 - 5.286: 98.2810% ( 3) 00:15:21.569 5.286 - 5.310: 98.2880% ( 1) 00:15:21.569 5.310 - 5.333: 98.3091% ( 3) 00:15:21.569 5.333 - 5.357: 98.3232% ( 2) 00:15:21.569 5.357 - 5.381: 98.3373% ( 2) 00:15:21.569 5.381 - 5.404: 98.3514% ( 2) 00:15:21.569 5.404 - 5.428: 98.3585% ( 1) 00:15:21.569 5.428 - 5.452: 98.3655% ( 1) 00:15:21.569 5.452 - 5.476: 98.3796% ( 2) 00:15:21.569 5.476 - 5.499: 98.3866% ( 1) 00:15:21.569 5.499 - 5.523: 98.4007% ( 2) 00:15:21.569 5.523 - 5.547: 98.4078% ( 1) 00:15:21.569 5.547 - 5.570: 98.4360% ( 4) 00:15:21.569 5.594 - 5.618: 98.4641% ( 4) 00:15:21.569 5.618 - 5.641: 98.4853% ( 3) 00:15:21.569 5.665 - 
5.689: 98.4994% ( 2) 00:15:21.569 5.689 - 5.713: 98.5064% ( 1) 00:15:21.569 5.713 - 5.736: 98.5135% ( 1) 00:15:21.569 5.760 - 5.784: 98.5275% ( 2) 00:15:21.569 5.807 - 5.831: 98.5346% ( 1) 00:15:21.569 5.855 - 5.879: 98.5416% ( 1) 00:15:21.569 5.902 - 5.926: 98.5487% ( 1) 00:15:21.569 5.997 - 6.021: 98.5557% ( 1) 00:15:21.569 6.116 - 6.163: 98.5698% ( 2) 00:15:21.569 6.258 - 6.305: 98.5769% ( 1) 00:15:21.569 6.305 - 6.353: 98.5839% ( 1) 00:15:21.569 6.637 - 6.684: 98.5910% ( 1) 00:15:21.569 6.684 - 6.732: 98.5980% ( 1) 00:15:21.569 6.827 - 6.874: 98.6050% ( 1) 00:15:21.569 7.016 - 7.064: 98.6121% ( 1) 00:15:21.569 7.064 - 7.111: 98.6191% ( 1) 00:15:21.569 7.206 - 7.253: 98.6332% ( 2) 00:15:21.569 7.348 - 7.396: 98.6473% ( 2) 00:15:21.569 7.396 - 7.443: 98.6544% ( 1) 00:15:21.569 7.443 - 7.490: 98.6685% ( 2) 00:15:21.569 7.490 - 7.538: 98.6755% ( 1) 00:15:21.569 7.538 - 7.585: 98.6825% ( 1) 00:15:21.569 7.585 - 7.633: 98.6896% ( 1) 00:15:21.569 7.633 - 7.680: 98.6966% ( 1) 00:15:21.569 7.680 - 7.727: 98.7037% ( 1) 00:15:21.569 7.775 - 7.822: 98.7107% ( 1) 00:15:21.569 7.822 - 7.870: 98.7178% ( 1) 00:15:21.569 7.917 - 7.964: 98.7389% ( 3) 00:15:21.569 8.107 - 8.154: 98.7530% ( 2) 00:15:21.569 8.154 - 8.201: 98.7600% ( 1) 00:15:21.569 8.296 - 8.344: 98.7741% ( 2) 00:15:21.569 8.344 - 8.391: 98.7882% ( 2) 00:15:21.569 8.439 - 8.486: 98.7953% ( 1) 00:15:21.569 8.486 - 8.533: 98.8023% ( 1) 00:15:21.569 8.581 - 8.628: 98.8305% ( 4) 00:15:21.569 8.628 - 8.676: 98.8375% ( 1) 00:15:21.569 9.007 - 9.055: 98.8446% ( 1) 00:15:21.569 9.150 - 9.197: 98.8516% ( 1) 00:15:21.569 9.387 - 9.434: 98.8587% ( 1) 00:15:21.569 9.576 - 9.624: 98.8657% ( 1) 00:15:21.569 9.624 - 9.671: 98.8728% ( 1) 00:15:21.569 9.813 - 9.861: 98.8869% ( 2) 00:15:21.569 10.098 - 10.145: 98.8939% ( 1) 00:15:21.569 10.193 - 10.240: 98.9009% ( 1) 00:15:21.569 10.335 - 10.382: 98.9080% ( 1) 00:15:21.569 10.430 - 10.477: 98.9150% ( 1) 00:15:21.569 10.572 - 10.619: 98.9221% ( 1) 00:15:21.569 10.856 - 10.904: 
98.9291% ( 1) 00:15:21.569 11.093 - 11.141: 98.9362% ( 1) 00:15:21.569 11.188 - 11.236: 98.9432% ( 1) 00:15:21.569 11.378 - 11.425: 98.9503% ( 1) 00:15:21.569 11.615 - 11.662: 98.9644% ( 2) 00:15:21.569 12.041 - 12.089: 98.9714% ( 1) 00:15:21.569 12.231 - 12.326: 98.9784% ( 1) 00:15:21.569 13.843 - 13.938: 98.9855% ( 1) 00:15:21.569 14.601 - 14.696: 98.9925% ( 1) 00:15:21.569 14.791 - 14.886: 98.9996% ( 1) 00:15:21.569 14.886 - 14.981: 99.0066% ( 1) 00:15:21.569 15.076 - 15.170: 99.0137% ( 1) 00:15:21.569 17.067 - 17.161: 99.0207% ( 1) 00:15:21.569 17.161 - 17.256: 99.0278% ( 1) 00:15:21.569 17.256 - 17.351: 99.0559% ( 4) 00:15:21.569 17.351 - 17.446: 99.0912% ( 5) 00:15:21.569 17.446 - 17.541: 99.1053% ( 2) 00:15:21.569 17.541 - 17.636: 99.1475% ( 6) 00:15:21.569 17.636 - 17.730: 99.2039% ( 8) 00:15:21.569 17.730 - 17.825: 99.2673% ( 9) 00:15:21.569 17.825 - 17.920: 99.3096% ( 6) 00:15:21.569 17.920 - 18.015: 99.3941% ( 12) 00:15:21.569 18.015 - 18.110: 99.4434% ( 7) 00:15:21.569 18.110 - 18.204: 99.5209% ( 11) 00:15:21.569 18.204 - 18.299: 99.5632% ( 6) 00:15:21.569 18.299 - 18.394: 99.6196% ( 8) 00:15:21.569 18.394 - 18.489: 99.7041% ( 12) 00:15:21.569 18.489 - 18.584: 99.7323% ( 4) 00:15:21.569 18.584 - 18.679: 99.7534% ( 3) 00:15:21.569 18.679 - 18.773: 99.7746% ( 3) 00:15:21.569 18.773 - 18.868: 99.7886% ( 2) 00:15:21.569 18.868 - 18.963: 99.8027% ( 2) 00:15:21.569 18.963 - 19.058: 99.8309% ( 4) 00:15:21.569 19.058 - 19.153: 99.8450% ( 2) 00:15:21.569 19.437 - 19.532: 99.8521% ( 1) 00:15:21.569 19.627 - 19.721: 99.8591% ( 1) 00:15:21.569 19.721 - 19.816: 99.8661% ( 1) 00:15:21.569 21.049 - 21.144: 99.8732% ( 1) 00:15:21.569 22.756 - 22.850: 99.8802% ( 1) 00:15:21.569 3980.705 - 4004.978: 99.9859% ( 15) 00:15:21.569 4004.978 - 4029.250: 99.9930% ( 1) 00:15:21.569 4975.881 - 5000.154: 100.0000% ( 1) 00:15:21.569 00:15:21.569 Complete histogram 00:15:21.569 ================== 00:15:21.569 Range in us Cumulative Count 00:15:21.569 2.050 - 2.062: 0.9159% ( 130) 
00:15:21.569 2.062 - 2.074: 7.7920% ( 976) 00:15:21.569 2.074 - 2.086: 20.7482% ( 1839) 00:15:21.569 2.086 - 2.098: 28.8995% ( 1157) 00:15:21.569 2.098 - 2.110: 40.2987% ( 1618) 00:15:21.569 2.110 - 2.121: 56.3759% ( 2282) 00:15:21.569 2.121 - 2.133: 61.4626% ( 722) 00:15:21.569 2.133 - 2.145: 65.4572% ( 567) 00:15:21.569 2.145 - 2.157: 70.7271% ( 748) 00:15:21.569 2.157 - 2.169: 75.2501% ( 642) 00:15:21.569 2.169 - 2.181: 81.5979% ( 901) 00:15:21.569 2.181 - 2.193: 87.7554% ( 874) 00:15:21.569 2.193 - 2.204: 89.3335% ( 224) 00:15:21.569 2.204 - 2.216: 90.7073% ( 195) 00:15:21.569 2.216 - 2.228: 92.0459% ( 190) 00:15:21.569 2.228 - 2.240: 93.0534% ( 143) 00:15:21.569 2.240 - 2.252: 94.1947% ( 162) 00:15:21.569 2.252 - 2.264: 94.9979% ( 114) 00:15:21.569 2.264 - 2.276: 95.2163% ( 31) 00:15:21.569 2.276 - 2.287: 95.4981% ( 40) 00:15:21.569 2.287 - 2.299: 95.6883% ( 27) 00:15:21.569 2.299 - 2.311: 95.8363% ( 21) 00:15:21.569 2.311 - 2.323: 96.0265% ( 27) 00:15:21.569 2.323 - 2.335: 96.1181% ( 13) 00:15:21.569 2.335 - 2.347: 96.2449% ( 18) 00:15:21.569 2.347 - 2.359: 96.4140% ( 24) 00:15:21.569 2.359 - 2.370: 96.7028% ( 41) 00:15:21.569 2.370 - 2.382: 96.9987% ( 42) 00:15:21.569 2.382 - 2.394: 97.2594% ( 37) 00:15:21.569 2.394 - 2.406: 97.4778% ( 31) 00:15:21.569 2.406 - 2.418: 97.6258% ( 21) 00:15:21.569 2.418 - 2.430: 97.7385% ( 16) 00:15:21.569 2.430 - 2.441: 97.8935% ( 22) 00:15:21.569 2.441 - 2.453: 97.9851% ( 13) 00:15:21.569 2.453 - 2.465: 98.0485% ( 9) 00:15:21.569 2.465 - 2.477: 98.0626% ( 2) 00:15:21.569 2.477 - 2.489: 98.0837% ( 3) 00:15:21.569 2.501 - 2.513: 98.1189% ( 5) 00:15:21.569 2.513 - 2.524: 98.1401% ( 3) 00:15:21.569 2.524 - 2.536: 98.1612% ( 3) 00:15:21.570 2.536 - 2.548: 98.1682% ( 1) 00:15:21.570 2.548 - 2.560: 98.1753% ( 1) 00:15:21.570 2.560 - 2.572: 98.1823% ( 1) 00:15:21.570 2.572 - 2.584: 98.2105% ( 4) 00:15:21.570 2.596 - 2.607: 98.2246% ( 2) 00:15:21.570 2.607 - 2.619: 98.2387% ( 2) 00:15:21.570 2.619 - 2.631: 98.2457% ( 1) 00:15:21.570 
2.631 - 2.643: 98.2528% ( 1) 00:15:21.570 2.643 - 2.655: 98.2669% ( 2) 00:15:21.570 2.667 - 2.679: 98.2739% ( 1) 00:15:21.570 2.679 - 2.690: 98.2880% ( 2) 00:15:21.570 2.702 - 2.714: 98.3021% ( 2) 00:15:21.570 2.714 - 2.726: 98.3091% ( 1) 00:15:21.570 2.726 - 2.738: 98.3232% ( 2) 00:15:21.570 2.773 - 2.785: 98.3303% ( 1) 00:15:21.570 2.797 - 2.809: 98.3373% ( 1) 00:15:21.570 2.821 - 2.833: 98.3514% ( 2) 00:15:21.570 2.833 - 2.844: 98.3585% ( 1) 00:15:21.570 2.844 - 2.856: 98.3655% ( 1) 00:15:21.570 2.856 - 2.868: 98.3726% ( 1) 00:15:21.570 2.904 - 2.916: 98.3796% ( 1) 00:15:21.570 2.916 - 2.927: 98.3937% ( 2) 00:15:21.570 2.927 - 2.939: 98.4078% ( 2) 00:15:21.570 2.939 - 2.951: 98.4148% ( 1) 00:15:21.570 2.963 - 2.975: 98.4289% ( 2) 00:15:21.570 2.975 - 2.987: 98.4360% ( 1) 00:15:21.570 2.987 - 2.999: 98.4430% ( 1) 00:15:21.570 3.010 - 3.022: 98.4500% ( 1) 00:15:21.570 3.022 - 3.034: 98.4571% ( 1) 00:15:21.570 3.034 - 3.058: 98.4712% ( 2) 00:15:21.570 3.058 - 3.081: 98.4923% ( 3) 00:15:21.570 3.105 - 3.129: 98.5064% ( 2) 00:15:21.570 3.129 - 3.153: 98.5205% ( 2) 00:15:21.570 3.153 - 3.176: 98.5557% ( 5) 00:15:21.570 3.176 - 3.200: 98.5698% ( 2) 00:15:21.570 3.200 - 3.224: 98.5980% ( 4) 00:15:21.570 3.224 - 3.247: 98.6262% ( 4) 00:15:21.570 3.247 - 3.271: 98.6473% ( 3) 00:15:21.570 3.271 - 3.295: 98.6896% ( 6) 00:15:21.570 3.319 - 3.342: 98.6966% ( 1) 00:15:21.570 3.342 - 3.366: 98.7037% ( 1) 00:15:21.570 3.366 - 3.390: 98.7107% ( 1) 00:15:21.570 3.390 - 3.413: 98.7248% ( 2) 00:15:21.570 3.413 - 3.437: 98.7319% ( 1) 00:15:21.570 3.437 - 3.461: 98.7459% ( 2) 00:15:21.570 3.461 - 3.484: 98.7600% ( 2) 00:15:21.570 3.484 - 3.508: 98.7812% ( 3) 00:15:21.570 3.508 - 3.532: 98.7953% ( 2) 00:15:21.570 3.532 - 3.556: 98.8164% ( 3) 00:15:21.570 3.579 - 3.603: 98.8234% ( 1) 00:15:21.570 3.603 - 3.627: 98.8305% ( 1) 00:15:21.570 3.674 - 3.698: 98.8446% ( 2) 00:15:21.570 3.698 - 3.721: 98.8516% ( 1) 00:15:21.570 3.745 - 3.769: 98.8587% ( 1) 00:15:21.570 3.793 - 3.816: 98.8728% ( 
2) 00:15:21.570 3.816 - 3.840: 98.8798% ( 1) 00:15:21.570 3.959 - 3.982: 98.8869% ( 1) 00:15:21.570 4.053 - 4.077: 98.9009% ( 2) 00:15:21.570 4.101 - 4.124: 98.9150% ( 2) 00:15:21.570 4.148 - 4.172: 98.9221% ( 1) 00:15:21.570 4.907 - 4.930: 98.9291% ( 1) 00:15:21.570 5.239 - 5.262: 98.9362% ( 1) 00:15:21.570 5.262 - 5.286: 98.9432% ( 1) 00:15:21.570 5.428 - 5.452: 98.9503% ( 1) 00:15:21.570 5.523 - 5.547: 98.9573% ( 1) 00:15:21.570 5.618 - 5.641: 98.9644% ( 1) 00:15:21.570 5.831 - 5.855: 98.9714% ( 1) 00:15:21.570 6.044 - 6.068: 98.9784% ( 1) 00:15:21.570 6.305 - 6.353: 98.9855% ( 1) 00:15:21.570 6.637 - 6.684: 98.9925% ( 1) 00:15:21.570 8.344 - 8.391: 98.9996% ( 1) 00:15:21.570 15.455 - 15.550: 99.0066% ( 1) 00:15:21.570 15.644 - 15.739: 99.0137% ( 1) 00:15:21.570 15.739 - 15.834: 99.0418% ( 4) 00:15:21.570 15.929 - 16.024: 99.0630% ( 3) 00:15:21.570 16.024 - 16.119: 99.0982% ( 5) 00:15:21.570 16.119 - 16.213: 99.1405% ( 6) 00:15:21.570 16.213 - 16.308: 99.1616% ( 3) 00:15:21.570 16.308 - 16.403: 99.1968% ( 5) 00:15:21.570 16.403 - 16.498: 99.2391% ( 6) 00:15:21.570 16.498 - 16.593: 99.2532% ( 2) 00:15:21.570 16.593 - 16.687: 99.2743% ( 3) 00:15:21.570 16.687 - 16.782: 99.3307% ( 8) 00:15:21.570 16.782 - 16.877: 99.3800% ( 7) 00:15:21.570 16.877 - 16.972: 99.3871% ( 1) 00:15:21.570 16.972 - 17.067: 99.4223% ( 5) 00:15:21.570 17.067 - 17.161: 99.4293% ( 1) 00:15:21.570 17.161 - 17.256: 99.4434% ( 2) 00:15:21.570 17.256 - 17.351: 99.4646% ( 3) 00:15:21.570 17.446 - 17.541: 99.4716% ( 1) 00:15:21.570 17.541 - 17.636: 99.4787% ( 1) 00:15:21.570 3009.801 - 3021.938: 99.4857% ( 1) 00:15:21.570 3021.938 - 3034.074: 99.4927% ( 1) 00:15:21.570 3070.483 - 3082.619: 99.4998% ( 1) 00:15:21.570 3155.437 - 3179.710: 99.5068% ( 1) 00:15:21.570 3810.797 - 3835.070: 99.5139% ( 1) 00:15:21.570 3980.705 - 4004.978: 99.9436% ( 61) 00:15:21.570 4004.978 - 4029.250: 99.9789% ( 5) 00:15:21.570 4975.881 - 5000.154: 100.0000% ( 3) 00:15:21.570 00:15:21.570 03:03:16 -- 
target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:21.570 03:03:16 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:21.570 03:03:16 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:21.570 03:03:16 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:21.570 03:03:16 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:21.570 [2024-07-14 03:03:16.676046] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:21.570 [ 00:15:21.570 { 00:15:21.570 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:21.570 "subtype": "Discovery", 00:15:21.570 "listen_addresses": [], 00:15:21.570 "allow_any_host": true, 00:15:21.570 "hosts": [] 00:15:21.570 }, 00:15:21.570 { 00:15:21.570 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:21.570 "subtype": "NVMe", 00:15:21.570 "listen_addresses": [ 00:15:21.570 { 00:15:21.570 "transport": "VFIOUSER", 00:15:21.570 "trtype": "VFIOUSER", 00:15:21.570 "adrfam": "IPv4", 00:15:21.570 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:21.570 "trsvcid": "0" 00:15:21.570 } 00:15:21.570 ], 00:15:21.570 "allow_any_host": true, 00:15:21.570 "hosts": [], 00:15:21.570 "serial_number": "SPDK1", 00:15:21.570 "model_number": "SPDK bdev Controller", 00:15:21.570 "max_namespaces": 32, 00:15:21.570 "min_cntlid": 1, 00:15:21.570 "max_cntlid": 65519, 00:15:21.570 "namespaces": [ 00:15:21.570 { 00:15:21.570 "nsid": 1, 00:15:21.570 "bdev_name": "Malloc1", 00:15:21.570 "name": "Malloc1", 00:15:21.570 "nguid": "EF5F0CC5C7C448B6811B594DA0766F16", 00:15:21.570 "uuid": "ef5f0cc5-c7c4-48b6-811b-594da0766f16" 00:15:21.570 } 00:15:21.570 ] 00:15:21.570 }, 00:15:21.570 { 00:15:21.570 "nqn": "nqn.2019-07.io.spdk:cnode2", 
00:15:21.570 "subtype": "NVMe", 00:15:21.570 "listen_addresses": [ 00:15:21.570 { 00:15:21.570 "transport": "VFIOUSER", 00:15:21.570 "trtype": "VFIOUSER", 00:15:21.570 "adrfam": "IPv4", 00:15:21.570 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:21.570 "trsvcid": "0" 00:15:21.570 } 00:15:21.570 ], 00:15:21.570 "allow_any_host": true, 00:15:21.570 "hosts": [], 00:15:21.570 "serial_number": "SPDK2", 00:15:21.570 "model_number": "SPDK bdev Controller", 00:15:21.570 "max_namespaces": 32, 00:15:21.570 "min_cntlid": 1, 00:15:21.570 "max_cntlid": 65519, 00:15:21.570 "namespaces": [ 00:15:21.570 { 00:15:21.570 "nsid": 1, 00:15:21.570 "bdev_name": "Malloc2", 00:15:21.570 "name": "Malloc2", 00:15:21.570 "nguid": "BC2FF3DEAF464CDC95AA7979642C4FD1", 00:15:21.570 "uuid": "bc2ff3de-af46-4cdc-95aa-7979642c4fd1" 00:15:21.570 } 00:15:21.570 ] 00:15:21.570 } 00:15:21.570 ] 00:15:21.570 03:03:16 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:21.570 03:03:16 -- target/nvmf_vfio_user.sh@34 -- # aerpid=1977726 00:15:21.570 03:03:16 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:21.570 03:03:16 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:21.570 03:03:16 -- common/autotest_common.sh@1244 -- # local i=0 00:15:21.570 03:03:16 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:21.570 03:03:16 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:21.570 03:03:16 -- common/autotest_common.sh@1255 -- # return 0 00:15:21.570 03:03:16 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:21.570 03:03:16 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:21.570 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.829 Malloc3 00:15:21.829 03:03:16 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:22.087 03:03:17 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:22.087 Asynchronous Event Request test 00:15:22.087 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:22.087 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:22.087 Registering asynchronous event callbacks... 00:15:22.087 Starting namespace attribute notice tests for all controllers... 00:15:22.087 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:22.087 aer_cb - Changed Namespace 00:15:22.087 Cleaning up... 
00:15:22.348 [ 00:15:22.348 { 00:15:22.348 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:22.348 "subtype": "Discovery", 00:15:22.349 "listen_addresses": [], 00:15:22.349 "allow_any_host": true, 00:15:22.349 "hosts": [] 00:15:22.349 }, 00:15:22.349 { 00:15:22.349 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:22.349 "subtype": "NVMe", 00:15:22.349 "listen_addresses": [ 00:15:22.349 { 00:15:22.349 "transport": "VFIOUSER", 00:15:22.349 "trtype": "VFIOUSER", 00:15:22.349 "adrfam": "IPv4", 00:15:22.349 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:22.349 "trsvcid": "0" 00:15:22.349 } 00:15:22.349 ], 00:15:22.349 "allow_any_host": true, 00:15:22.349 "hosts": [], 00:15:22.349 "serial_number": "SPDK1", 00:15:22.349 "model_number": "SPDK bdev Controller", 00:15:22.349 "max_namespaces": 32, 00:15:22.349 "min_cntlid": 1, 00:15:22.349 "max_cntlid": 65519, 00:15:22.349 "namespaces": [ 00:15:22.349 { 00:15:22.349 "nsid": 1, 00:15:22.349 "bdev_name": "Malloc1", 00:15:22.349 "name": "Malloc1", 00:15:22.349 "nguid": "EF5F0CC5C7C448B6811B594DA0766F16", 00:15:22.349 "uuid": "ef5f0cc5-c7c4-48b6-811b-594da0766f16" 00:15:22.349 }, 00:15:22.349 { 00:15:22.349 "nsid": 2, 00:15:22.349 "bdev_name": "Malloc3", 00:15:22.349 "name": "Malloc3", 00:15:22.349 "nguid": "081FC8C60603450D89CD8E4693A70117", 00:15:22.349 "uuid": "081fc8c6-0603-450d-89cd-8e4693a70117" 00:15:22.349 } 00:15:22.349 ] 00:15:22.349 }, 00:15:22.349 { 00:15:22.349 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:22.349 "subtype": "NVMe", 00:15:22.349 "listen_addresses": [ 00:15:22.349 { 00:15:22.349 "transport": "VFIOUSER", 00:15:22.349 "trtype": "VFIOUSER", 00:15:22.349 "adrfam": "IPv4", 00:15:22.349 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:22.349 "trsvcid": "0" 00:15:22.349 } 00:15:22.349 ], 00:15:22.349 "allow_any_host": true, 00:15:22.349 "hosts": [], 00:15:22.349 "serial_number": "SPDK2", 00:15:22.349 "model_number": "SPDK bdev Controller", 00:15:22.349 "max_namespaces": 32, 00:15:22.349 
"min_cntlid": 1, 00:15:22.349 "max_cntlid": 65519, 00:15:22.349 "namespaces": [ 00:15:22.349 { 00:15:22.349 "nsid": 1, 00:15:22.349 "bdev_name": "Malloc2", 00:15:22.349 "name": "Malloc2", 00:15:22.349 "nguid": "BC2FF3DEAF464CDC95AA7979642C4FD1", 00:15:22.349 "uuid": "bc2ff3de-af46-4cdc-95aa-7979642c4fd1" 00:15:22.349 } 00:15:22.349 ] 00:15:22.349 } 00:15:22.349 ] 00:15:22.349 03:03:17 -- target/nvmf_vfio_user.sh@44 -- # wait 1977726 00:15:22.349 03:03:17 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:22.349 03:03:17 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:22.349 03:03:17 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:22.349 03:03:17 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:22.349 [2024-07-14 03:03:17.445414] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:22.349 [2024-07-14 03:03:17.445466] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1977760 ] 00:15:22.349 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.349 [2024-07-14 03:03:17.479828] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:22.349 [2024-07-14 03:03:17.482170] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:22.349 [2024-07-14 03:03:17.482200] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1a31f77000 00:15:22.349 [2024-07-14 03:03:17.483178] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.349 [2024-07-14 03:03:17.484179] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.349 [2024-07-14 03:03:17.485183] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.349 [2024-07-14 03:03:17.486190] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.349 [2024-07-14 03:03:17.487200] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:22.349 [2024-07-14 03:03:17.488209] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.349 [2024-07-14 03:03:17.489214] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, 
Flags 0x3, Cap offset 0 00:15:22.349 [2024-07-14 03:03:17.490224] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:22.349 [2024-07-14 03:03:17.491237] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:22.349 [2024-07-14 03:03:17.491259] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1a30d2d000 00:15:22.349 [2024-07-14 03:03:17.492377] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:22.349 [2024-07-14 03:03:17.506467] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:22.349 [2024-07-14 03:03:17.506500] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:22.349 [2024-07-14 03:03:17.511602] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:22.349 [2024-07-14 03:03:17.511650] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:22.349 [2024-07-14 03:03:17.511733] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:22.349 [2024-07-14 03:03:17.511756] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:22.349 [2024-07-14 03:03:17.511766] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:22.349 [2024-07-14 03:03:17.512609] nvme_vfio_user.c: 
83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:22.349 [2024-07-14 03:03:17.512629] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:22.349 [2024-07-14 03:03:17.512642] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:22.349 [2024-07-14 03:03:17.513620] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:22.349 [2024-07-14 03:03:17.513640] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:22.349 [2024-07-14 03:03:17.513653] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:22.349 [2024-07-14 03:03:17.514624] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:22.349 [2024-07-14 03:03:17.514645] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:22.349 [2024-07-14 03:03:17.515628] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:22.349 [2024-07-14 03:03:17.515648] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:22.349 [2024-07-14 03:03:17.515657] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:22.349 [2024-07-14 03:03:17.515668] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:22.349 [2024-07-14 03:03:17.515777] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:22.349 [2024-07-14 03:03:17.515785] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:22.349 [2024-07-14 03:03:17.515793] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:22.349 [2024-07-14 03:03:17.516635] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:22.349 [2024-07-14 03:03:17.517636] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:22.349 [2024-07-14 03:03:17.518646] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:22.349 [2024-07-14 03:03:17.519693] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:22.349 [2024-07-14 03:03:17.520656] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:22.349 [2024-07-14 03:03:17.520675] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:22.349 [2024-07-14 03:03:17.520685] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:22.349 [2024-07-14 03:03:17.520708] 
nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:22.349 [2024-07-14 03:03:17.520720] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:22.349 [2024-07-14 03:03:17.520737] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.349 [2024-07-14 03:03:17.520746] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.349 [2024-07-14 03:03:17.520762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.349 [2024-07-14 03:03:17.528894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:22.349 [2024-07-14 03:03:17.528917] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:22.349 [2024-07-14 03:03:17.528926] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:22.349 [2024-07-14 03:03:17.528933] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:22.349 [2024-07-14 03:03:17.528941] nvme_ctrlr.c:1990:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:22.349 [2024-07-14 03:03:17.528949] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:22.350 [2024-07-14 03:03:17.528956] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:22.350 [2024-07-14 03:03:17.528964] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.528981] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.528998] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:22.350 [2024-07-14 03:03:17.536890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:22.350 [2024-07-14 03:03:17.536918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.350 [2024-07-14 03:03:17.536942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.350 [2024-07-14 03:03:17.536954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.350 [2024-07-14 03:03:17.536965] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:22.350 [2024-07-14 03:03:17.536973] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.536989] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.537003] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:22.350 [2024-07-14 03:03:17.544879] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:22.350 [2024-07-14 03:03:17.544896] nvme_ctrlr.c:2878:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:22.350 [2024-07-14 03:03:17.544906] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.544917] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.544930] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.544946] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:22.350 [2024-07-14 03:03:17.552878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:22.350 [2024-07-14 03:03:17.552951] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.552967] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.552980] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:22.350 [2024-07-14 03:03:17.552989] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:22.350 [2024-07-14 03:03:17.552999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:22.350 [2024-07-14 03:03:17.560879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:22.350 [2024-07-14 03:03:17.560906] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:22.350 [2024-07-14 03:03:17.560933] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.560948] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.560960] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.350 [2024-07-14 03:03:17.560968] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.350 [2024-07-14 03:03:17.560978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.350 [2024-07-14 03:03:17.568878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:22.350 [2024-07-14 03:03:17.568905] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.568920] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.568933] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:22.350 
[2024-07-14 03:03:17.568941] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.350 [2024-07-14 03:03:17.568951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.350 [2024-07-14 03:03:17.576876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:22.350 [2024-07-14 03:03:17.576897] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.576910] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.576924] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.576934] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.576943] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.576951] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:22.350 [2024-07-14 03:03:17.576958] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:22.350 [2024-07-14 03:03:17.576970] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:22.350 [2024-07-14 
03:03:17.576996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:22.350 [2024-07-14 03:03:17.584874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:22.350 [2024-07-14 03:03:17.584901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:22.350 [2024-07-14 03:03:17.592889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:22.350 [2024-07-14 03:03:17.592914] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:22.610 [2024-07-14 03:03:17.600895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:22.610 [2024-07-14 03:03:17.600922] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:22.610 [2024-07-14 03:03:17.608876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:22.610 [2024-07-14 03:03:17.608905] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:22.610 [2024-07-14 03:03:17.608916] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:22.610 [2024-07-14 03:03:17.608922] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:22.610 [2024-07-14 03:03:17.608928] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:22.610 [2024-07-14 03:03:17.608938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 
cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:22.610 [2024-07-14 03:03:17.608949] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:22.610 [2024-07-14 03:03:17.608958] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:22.610 [2024-07-14 03:03:17.608967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:22.610 [2024-07-14 03:03:17.608977] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:22.610 [2024-07-14 03:03:17.608985] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:22.610 [2024-07-14 03:03:17.608994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:22.610 [2024-07-14 03:03:17.609006] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:22.610 [2024-07-14 03:03:17.609014] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:22.610 [2024-07-14 03:03:17.609023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:22.610 [2024-07-14 03:03:17.616879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:22.610 [2024-07-14 03:03:17.616909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:22.610 [2024-07-14 03:03:17.616924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:22.610 [2024-07-14 
03:03:17.616936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:22.610 ===================================================== 00:15:22.610 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:22.610 ===================================================== 00:15:22.610 Controller Capabilities/Features 00:15:22.610 ================================ 00:15:22.610 Vendor ID: 4e58 00:15:22.610 Subsystem Vendor ID: 4e58 00:15:22.610 Serial Number: SPDK2 00:15:22.610 Model Number: SPDK bdev Controller 00:15:22.610 Firmware Version: 24.01.1 00:15:22.610 Recommended Arb Burst: 6 00:15:22.610 IEEE OUI Identifier: 8d 6b 50 00:15:22.610 Multi-path I/O 00:15:22.610 May have multiple subsystem ports: Yes 00:15:22.610 May have multiple controllers: Yes 00:15:22.610 Associated with SR-IOV VF: No 00:15:22.610 Max Data Transfer Size: 131072 00:15:22.611 Max Number of Namespaces: 32 00:15:22.611 Max Number of I/O Queues: 127 00:15:22.611 NVMe Specification Version (VS): 1.3 00:15:22.611 NVMe Specification Version (Identify): 1.3 00:15:22.611 Maximum Queue Entries: 256 00:15:22.611 Contiguous Queues Required: Yes 00:15:22.611 Arbitration Mechanisms Supported 00:15:22.611 Weighted Round Robin: Not Supported 00:15:22.611 Vendor Specific: Not Supported 00:15:22.611 Reset Timeout: 15000 ms 00:15:22.611 Doorbell Stride: 4 bytes 00:15:22.611 NVM Subsystem Reset: Not Supported 00:15:22.611 Command Sets Supported 00:15:22.611 NVM Command Set: Supported 00:15:22.611 Boot Partition: Not Supported 00:15:22.611 Memory Page Size Minimum: 4096 bytes 00:15:22.611 Memory Page Size Maximum: 4096 bytes 00:15:22.611 Persistent Memory Region: Not Supported 00:15:22.611 Optional Asynchronous Events Supported 00:15:22.611 Namespace Attribute Notices: Supported 00:15:22.611 Firmware Activation Notices: Not Supported 00:15:22.611 ANA Change Notices: Not Supported 00:15:22.611 PLE 
Aggregate Log Change Notices: Not Supported 00:15:22.611 LBA Status Info Alert Notices: Not Supported 00:15:22.611 EGE Aggregate Log Change Notices: Not Supported 00:15:22.611 Normal NVM Subsystem Shutdown event: Not Supported 00:15:22.611 Zone Descriptor Change Notices: Not Supported 00:15:22.611 Discovery Log Change Notices: Not Supported 00:15:22.611 Controller Attributes 00:15:22.611 128-bit Host Identifier: Supported 00:15:22.611 Non-Operational Permissive Mode: Not Supported 00:15:22.611 NVM Sets: Not Supported 00:15:22.611 Read Recovery Levels: Not Supported 00:15:22.611 Endurance Groups: Not Supported 00:15:22.611 Predictable Latency Mode: Not Supported 00:15:22.611 Traffic Based Keep ALive: Not Supported 00:15:22.611 Namespace Granularity: Not Supported 00:15:22.611 SQ Associations: Not Supported 00:15:22.611 UUID List: Not Supported 00:15:22.611 Multi-Domain Subsystem: Not Supported 00:15:22.611 Fixed Capacity Management: Not Supported 00:15:22.611 Variable Capacity Management: Not Supported 00:15:22.611 Delete Endurance Group: Not Supported 00:15:22.611 Delete NVM Set: Not Supported 00:15:22.611 Extended LBA Formats Supported: Not Supported 00:15:22.611 Flexible Data Placement Supported: Not Supported 00:15:22.611 00:15:22.611 Controller Memory Buffer Support 00:15:22.611 ================================ 00:15:22.611 Supported: No 00:15:22.611 00:15:22.611 Persistent Memory Region Support 00:15:22.611 ================================ 00:15:22.611 Supported: No 00:15:22.611 00:15:22.611 Admin Command Set Attributes 00:15:22.611 ============================ 00:15:22.611 Security Send/Receive: Not Supported 00:15:22.611 Format NVM: Not Supported 00:15:22.611 Firmware Activate/Download: Not Supported 00:15:22.611 Namespace Management: Not Supported 00:15:22.611 Device Self-Test: Not Supported 00:15:22.611 Directives: Not Supported 00:15:22.611 NVMe-MI: Not Supported 00:15:22.611 Virtualization Management: Not Supported 00:15:22.611 Doorbell Buffer Config: 
Not Supported 00:15:22.611 Get LBA Status Capability: Not Supported 00:15:22.611 Command & Feature Lockdown Capability: Not Supported 00:15:22.611 Abort Command Limit: 4 00:15:22.611 Async Event Request Limit: 4 00:15:22.611 Number of Firmware Slots: N/A 00:15:22.611 Firmware Slot 1 Read-Only: N/A 00:15:22.611 Firmware Activation Without Reset: N/A 00:15:22.611 Multiple Update Detection Support: N/A 00:15:22.611 Firmware Update Granularity: No Information Provided 00:15:22.611 Per-Namespace SMART Log: No 00:15:22.611 Asymmetric Namespace Access Log Page: Not Supported 00:15:22.611 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:22.611 Command Effects Log Page: Supported 00:15:22.611 Get Log Page Extended Data: Supported 00:15:22.611 Telemetry Log Pages: Not Supported 00:15:22.611 Persistent Event Log Pages: Not Supported 00:15:22.611 Supported Log Pages Log Page: May Support 00:15:22.611 Commands Supported & Effects Log Page: Not Supported 00:15:22.611 Feature Identifiers & Effects Log Page:May Support 00:15:22.611 NVMe-MI Commands & Effects Log Page: May Support 00:15:22.611 Data Area 4 for Telemetry Log: Not Supported 00:15:22.611 Error Log Page Entries Supported: 128 00:15:22.611 Keep Alive: Supported 00:15:22.611 Keep Alive Granularity: 10000 ms 00:15:22.611 00:15:22.611 NVM Command Set Attributes 00:15:22.611 ========================== 00:15:22.611 Submission Queue Entry Size 00:15:22.611 Max: 64 00:15:22.611 Min: 64 00:15:22.611 Completion Queue Entry Size 00:15:22.611 Max: 16 00:15:22.611 Min: 16 00:15:22.611 Number of Namespaces: 32 00:15:22.611 Compare Command: Supported 00:15:22.611 Write Uncorrectable Command: Not Supported 00:15:22.611 Dataset Management Command: Supported 00:15:22.611 Write Zeroes Command: Supported 00:15:22.611 Set Features Save Field: Not Supported 00:15:22.611 Reservations: Not Supported 00:15:22.611 Timestamp: Not Supported 00:15:22.611 Copy: Supported 00:15:22.611 Volatile Write Cache: Present 00:15:22.611 Atomic Write Unit 
(Normal): 1 00:15:22.611 Atomic Write Unit (PFail): 1 00:15:22.611 Atomic Compare & Write Unit: 1 00:15:22.611 Fused Compare & Write: Supported 00:15:22.611 Scatter-Gather List 00:15:22.611 SGL Command Set: Supported (Dword aligned) 00:15:22.611 SGL Keyed: Not Supported 00:15:22.611 SGL Bit Bucket Descriptor: Not Supported 00:15:22.611 SGL Metadata Pointer: Not Supported 00:15:22.611 Oversized SGL: Not Supported 00:15:22.611 SGL Metadata Address: Not Supported 00:15:22.611 SGL Offset: Not Supported 00:15:22.611 Transport SGL Data Block: Not Supported 00:15:22.611 Replay Protected Memory Block: Not Supported 00:15:22.611 00:15:22.611 Firmware Slot Information 00:15:22.611 ========================= 00:15:22.611 Active slot: 1 00:15:22.611 Slot 1 Firmware Revision: 24.01.1 00:15:22.611 00:15:22.611 00:15:22.611 Commands Supported and Effects 00:15:22.611 ============================== 00:15:22.611 Admin Commands 00:15:22.611 -------------- 00:15:22.611 Get Log Page (02h): Supported 00:15:22.611 Identify (06h): Supported 00:15:22.611 Abort (08h): Supported 00:15:22.611 Set Features (09h): Supported 00:15:22.611 Get Features (0Ah): Supported 00:15:22.611 Asynchronous Event Request (0Ch): Supported 00:15:22.611 Keep Alive (18h): Supported 00:15:22.611 I/O Commands 00:15:22.611 ------------ 00:15:22.611 Flush (00h): Supported LBA-Change 00:15:22.611 Write (01h): Supported LBA-Change 00:15:22.611 Read (02h): Supported 00:15:22.611 Compare (05h): Supported 00:15:22.611 Write Zeroes (08h): Supported LBA-Change 00:15:22.611 Dataset Management (09h): Supported LBA-Change 00:15:22.611 Copy (19h): Supported LBA-Change 00:15:22.611 Unknown (79h): Supported LBA-Change 00:15:22.611 Unknown (7Ah): Supported 00:15:22.611 00:15:22.611 Error Log 00:15:22.611 ========= 00:15:22.611 00:15:22.611 Arbitration 00:15:22.611 =========== 00:15:22.611 Arbitration Burst: 1 00:15:22.611 00:15:22.611 Power Management 00:15:22.611 ================ 00:15:22.611 Number of Power States: 1 00:15:22.611 
Current Power State: Power State #0 00:15:22.611 Power State #0: 00:15:22.611 Max Power: 0.00 W 00:15:22.611 Non-Operational State: Operational 00:15:22.611 Entry Latency: Not Reported 00:15:22.611 Exit Latency: Not Reported 00:15:22.611 Relative Read Throughput: 0 00:15:22.611 Relative Read Latency: 0 00:15:22.611 Relative Write Throughput: 0 00:15:22.611 Relative Write Latency: 0 00:15:22.611 Idle Power: Not Reported 00:15:22.611 Active Power: Not Reported 00:15:22.611 Non-Operational Permissive Mode: Not Supported 00:15:22.611 00:15:22.611 Health Information 00:15:22.611 ================== 00:15:22.611 Critical Warnings: 00:15:22.611 Available Spare Space: OK 00:15:22.611 Temperature: OK 00:15:22.611 Device Reliability: OK 00:15:22.611 Read Only: No 00:15:22.611 Volatile Memory Backup: OK 00:15:22.611 Current Temperature: 0 Kelvin[2024-07-14 03:03:17.617066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:22.611 [2024-07-14 03:03:17.624892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:22.611 [2024-07-14 03:03:17.624940] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:22.611 [2024-07-14 03:03:17.624957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.611 [2024-07-14 03:03:17.624968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.611 [2024-07-14 03:03:17.624977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:22.611 [2024-07-14 03:03:17.624987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:22.611 [2024-07-14 03:03:17.625064] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:22.611 [2024-07-14 03:03:17.625084] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:22.611 [2024-07-14 03:03:17.626110] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:22.612 [2024-07-14 03:03:17.626126] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:22.612 [2024-07-14 03:03:17.627078] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:22.612 [2024-07-14 03:03:17.627102] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:22.612 [2024-07-14 03:03:17.627153] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:22.612 [2024-07-14 03:03:17.628360] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:22.612 (-273 Celsius) 00:15:22.612 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:22.612 Available Spare: 0% 00:15:22.612 Available Spare Threshold: 0% 00:15:22.612 Life Percentage Used: 0% 00:15:22.612 Data Units Read: 0 00:15:22.612 Data Units Written: 0 00:15:22.612 Host Read Commands: 0 00:15:22.612 Host Write Commands: 0 00:15:22.612 Controller Busy Time: 0 minutes 00:15:22.612 Power Cycles: 0 00:15:22.612 Power On Hours: 0 hours 00:15:22.612 Unsafe Shutdowns: 0 00:15:22.612 Unrecoverable Media Errors: 0 00:15:22.612 Lifetime Error Log Entries: 0 00:15:22.612 Warning Temperature Time: 0 minutes 
00:15:22.612 Critical Temperature Time: 0 minutes 00:15:22.612 00:15:22.612 Number of Queues 00:15:22.612 ================ 00:15:22.612 Number of I/O Submission Queues: 127 00:15:22.612 Number of I/O Completion Queues: 127 00:15:22.612 00:15:22.612 Active Namespaces 00:15:22.612 ================= 00:15:22.612 Namespace ID:1 00:15:22.612 Error Recovery Timeout: Unlimited 00:15:22.612 Command Set Identifier: NVM (00h) 00:15:22.612 Deallocate: Supported 00:15:22.612 Deallocated/Unwritten Error: Not Supported 00:15:22.612 Deallocated Read Value: Unknown 00:15:22.612 Deallocate in Write Zeroes: Not Supported 00:15:22.612 Deallocated Guard Field: 0xFFFF 00:15:22.612 Flush: Supported 00:15:22.612 Reservation: Supported 00:15:22.612 Namespace Sharing Capabilities: Multiple Controllers 00:15:22.612 Size (in LBAs): 131072 (0GiB) 00:15:22.612 Capacity (in LBAs): 131072 (0GiB) 00:15:22.612 Utilization (in LBAs): 131072 (0GiB) 00:15:22.612 NGUID: BC2FF3DEAF464CDC95AA7979642C4FD1 00:15:22.612 UUID: bc2ff3de-af46-4cdc-95aa-7979642c4fd1 00:15:22.612 Thin Provisioning: Not Supported 00:15:22.612 Per-NS Atomic Units: Yes 00:15:22.612 Atomic Boundary Size (Normal): 0 00:15:22.612 Atomic Boundary Size (PFail): 0 00:15:22.612 Atomic Boundary Offset: 0 00:15:22.612 Maximum Single Source Range Length: 65535 00:15:22.612 Maximum Copy Length: 65535 00:15:22.612 Maximum Source Range Count: 1 00:15:22.612 NGUID/EUI64 Never Reused: No 00:15:22.612 Namespace Write Protected: No 00:15:22.612 Number of LBA Formats: 1 00:15:22.612 Current LBA Format: LBA Format #00 00:15:22.612 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:22.612 00:15:22.612 03:03:17 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:22.612 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.891 
Initializing NVMe Controllers 00:15:27.891 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:27.891 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:27.891 Initialization complete. Launching workers. 00:15:27.891 ======================================================== 00:15:27.891 Latency(us) 00:15:27.891 Device Information : IOPS MiB/s Average min max 00:15:27.891 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 37752.82 147.47 3389.44 1151.19 7390.33 00:15:27.891 ======================================================== 00:15:27.891 Total : 37752.82 147.47 3389.44 1151.19 7390.33 00:15:27.891 00:15:27.891 03:03:23 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:27.891 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.211 Initializing NVMe Controllers 00:15:33.211 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:33.211 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:33.211 Initialization complete. Launching workers. 
00:15:33.211 ======================================================== 00:15:33.211 Latency(us) 00:15:33.211 Device Information : IOPS MiB/s Average min max 00:15:33.211 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36451.01 142.39 3511.08 1148.99 7632.79 00:15:33.211 ======================================================== 00:15:33.211 Total : 36451.01 142.39 3511.08 1148.99 7632.79 00:15:33.211 00:15:33.211 03:03:28 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:33.211 EAL: No free 2048 kB hugepages reported on node 1 00:15:38.482 Initializing NVMe Controllers 00:15:38.482 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:38.482 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:38.482 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:38.482 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:38.482 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:38.482 Initialization complete. Launching workers. 
00:15:38.482 Starting thread on core 2 00:15:38.482 Starting thread on core 3 00:15:38.482 Starting thread on core 1 00:15:38.482 03:03:33 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:38.482 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.774 Initializing NVMe Controllers 00:15:41.774 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.774 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:41.774 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:41.774 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:41.774 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:41.774 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:41.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:41.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:41.774 Initialization complete. Launching workers. 
00:15:41.774 Starting thread on core 1 with urgent priority queue 00:15:41.774 Starting thread on core 2 with urgent priority queue 00:15:41.774 Starting thread on core 3 with urgent priority queue 00:15:41.774 Starting thread on core 0 with urgent priority queue 00:15:41.774 SPDK bdev Controller (SPDK2 ) core 0: 5562.33 IO/s 17.98 secs/100000 ios 00:15:41.774 SPDK bdev Controller (SPDK2 ) core 1: 5614.00 IO/s 17.81 secs/100000 ios 00:15:41.774 SPDK bdev Controller (SPDK2 ) core 2: 5891.33 IO/s 16.97 secs/100000 ios 00:15:41.774 SPDK bdev Controller (SPDK2 ) core 3: 5781.00 IO/s 17.30 secs/100000 ios 00:15:41.774 ======================================================== 00:15:41.774 00:15:41.774 03:03:36 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:41.774 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.032 Initializing NVMe Controllers 00:15:42.032 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:42.032 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:42.032 Namespace ID: 1 size: 0GB 00:15:42.032 Initialization complete. 00:15:42.032 INFO: using host memory buffer for IO 00:15:42.032 Hello world! 00:15:42.032 03:03:37 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:42.032 EAL: No free 2048 kB hugepages reported on node 1 00:15:43.411 Initializing NVMe Controllers 00:15:43.411 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.411 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:43.411 Initialization complete. Launching workers. 
00:15:43.411 submit (in ns) avg, min, max = 7493.8, 3443.3, 4015536.7 00:15:43.411 complete (in ns) avg, min, max = 24959.0, 2037.8, 5012308.9 00:15:43.411 00:15:43.411 Submit histogram 00:15:43.411 ================ 00:15:43.411 Range in us Cumulative Count 00:15:43.411 3.437 - 3.461: 0.6100% ( 85) 00:15:43.411 3.461 - 3.484: 1.6003% ( 138) 00:15:43.412 3.484 - 3.508: 4.7147% ( 434) 00:15:43.412 3.508 - 3.532: 10.2189% ( 767) 00:15:43.412 3.532 - 3.556: 19.0527% ( 1231) 00:15:43.412 3.556 - 3.579: 29.0994% ( 1400) 00:15:43.412 3.579 - 3.603: 38.0768% ( 1251) 00:15:43.412 3.603 - 3.627: 45.5257% ( 1038) 00:15:43.412 3.627 - 3.650: 53.1109% ( 1057) 00:15:43.412 3.650 - 3.674: 59.1747% ( 845) 00:15:43.412 3.674 - 3.698: 64.0043% ( 673) 00:15:43.412 3.698 - 3.721: 67.6785% ( 512) 00:15:43.412 3.721 - 3.745: 70.2835% ( 363) 00:15:43.412 3.745 - 3.769: 73.4769% ( 445) 00:15:43.412 3.769 - 3.793: 76.8999% ( 477) 00:15:43.412 3.793 - 3.816: 80.3875% ( 486) 00:15:43.412 3.816 - 3.840: 83.7388% ( 467) 00:15:43.412 3.840 - 3.864: 86.3581% ( 365) 00:15:43.412 3.864 - 3.887: 88.2741% ( 267) 00:15:43.412 3.887 - 3.911: 90.0251% ( 244) 00:15:43.412 3.911 - 3.935: 91.8407% ( 253) 00:15:43.412 3.935 - 3.959: 92.9171% ( 150) 00:15:43.412 3.959 - 3.982: 93.7352% ( 114) 00:15:43.412 3.982 - 4.006: 94.6035% ( 121) 00:15:43.412 4.006 - 4.030: 95.2781% ( 94) 00:15:43.412 4.030 - 4.053: 95.8091% ( 74) 00:15:43.412 4.053 - 4.077: 96.1751% ( 51) 00:15:43.412 4.077 - 4.101: 96.3617% ( 26) 00:15:43.412 4.101 - 4.124: 96.5267% ( 23) 00:15:43.412 4.124 - 4.148: 96.6990% ( 24) 00:15:43.412 4.148 - 4.172: 96.7851% ( 12) 00:15:43.412 4.172 - 4.196: 96.8855% ( 14) 00:15:43.412 4.196 - 4.219: 97.0362% ( 21) 00:15:43.412 4.219 - 4.243: 97.1080% ( 10) 00:15:43.412 4.243 - 4.267: 97.1654% ( 8) 00:15:43.412 4.267 - 4.290: 97.2228% ( 8) 00:15:43.412 4.290 - 4.314: 97.2874% ( 9) 00:15:43.412 4.314 - 4.338: 97.3089% ( 3) 00:15:43.412 4.338 - 4.361: 97.3592% ( 7) 00:15:43.412 4.361 - 4.385: 97.3879% ( 4) 
00:15:43.412 4.385 - 4.409: 97.3950% ( 1) 00:15:43.412 4.409 - 4.433: 97.4166% ( 3) 00:15:43.412 4.433 - 4.456: 97.4238% ( 1) 00:15:43.412 4.527 - 4.551: 97.4381% ( 2) 00:15:43.412 4.575 - 4.599: 97.4596% ( 3) 00:15:43.412 4.622 - 4.646: 97.5099% ( 7) 00:15:43.412 4.646 - 4.670: 97.5386% ( 4) 00:15:43.412 4.670 - 4.693: 97.5745% ( 5) 00:15:43.412 4.693 - 4.717: 97.6032% ( 4) 00:15:43.412 4.717 - 4.741: 97.6247% ( 3) 00:15:43.412 4.741 - 4.764: 97.6390% ( 2) 00:15:43.412 4.764 - 4.788: 97.6964% ( 8) 00:15:43.412 4.788 - 4.812: 97.7610% ( 9) 00:15:43.412 4.812 - 4.836: 97.7826% ( 3) 00:15:43.412 4.836 - 4.859: 97.8041% ( 3) 00:15:43.412 4.859 - 4.883: 97.8184% ( 2) 00:15:43.412 4.883 - 4.907: 97.8471% ( 4) 00:15:43.412 4.907 - 4.930: 97.8902% ( 6) 00:15:43.412 4.930 - 4.954: 97.9117% ( 3) 00:15:43.412 4.954 - 4.978: 97.9404% ( 4) 00:15:43.412 4.978 - 5.001: 97.9835% ( 6) 00:15:43.412 5.001 - 5.025: 98.0194% ( 5) 00:15:43.412 5.025 - 5.049: 98.0481% ( 4) 00:15:43.412 5.049 - 5.073: 98.0983% ( 7) 00:15:43.412 5.073 - 5.096: 98.1055% ( 1) 00:15:43.412 5.096 - 5.120: 98.1342% ( 4) 00:15:43.412 5.120 - 5.144: 98.1629% ( 4) 00:15:43.412 5.144 - 5.167: 98.1988% ( 5) 00:15:43.412 5.167 - 5.191: 98.2131% ( 2) 00:15:43.412 5.191 - 5.215: 98.2203% ( 1) 00:15:43.412 5.215 - 5.239: 98.2275% ( 1) 00:15:43.412 5.239 - 5.262: 98.2347% ( 1) 00:15:43.412 5.262 - 5.286: 98.2490% ( 2) 00:15:43.412 5.286 - 5.310: 98.2562% ( 1) 00:15:43.412 5.310 - 5.333: 98.2634% ( 1) 00:15:43.412 5.381 - 5.404: 98.2849% ( 3) 00:15:43.412 5.404 - 5.428: 98.2992% ( 2) 00:15:43.412 5.428 - 5.452: 98.3064% ( 1) 00:15:43.412 5.499 - 5.523: 98.3136% ( 1) 00:15:43.412 5.547 - 5.570: 98.3280% ( 2) 00:15:43.412 5.594 - 5.618: 98.3351% ( 1) 00:15:43.412 5.665 - 5.689: 98.3495% ( 2) 00:15:43.412 5.713 - 5.736: 98.3710% ( 3) 00:15:43.412 5.736 - 5.760: 98.3854% ( 2) 00:15:43.412 5.902 - 5.926: 98.3925% ( 1) 00:15:43.412 6.044 - 6.068: 98.3997% ( 1) 00:15:43.412 6.068 - 6.116: 98.4069% ( 1) 00:15:43.412 6.210 - 
6.258: 98.4141% ( 1) 00:15:43.412 6.258 - 6.305: 98.4212% ( 1) 00:15:43.412 6.400 - 6.447: 98.4284% ( 1) 00:15:43.412 6.447 - 6.495: 98.4356% ( 1) 00:15:43.412 6.495 - 6.542: 98.4428% ( 1) 00:15:43.412 6.542 - 6.590: 98.4499% ( 1) 00:15:43.412 6.590 - 6.637: 98.4643% ( 2) 00:15:43.412 6.779 - 6.827: 98.4715% ( 1) 00:15:43.412 6.827 - 6.874: 98.4787% ( 1) 00:15:43.412 6.921 - 6.969: 98.4930% ( 2) 00:15:43.412 6.969 - 7.016: 98.5002% ( 1) 00:15:43.412 7.064 - 7.111: 98.5289% ( 4) 00:15:43.412 7.111 - 7.159: 98.5361% ( 1) 00:15:43.412 7.206 - 7.253: 98.5504% ( 2) 00:15:43.412 7.253 - 7.301: 98.5648% ( 2) 00:15:43.412 7.348 - 7.396: 98.5719% ( 1) 00:15:43.412 7.396 - 7.443: 98.5791% ( 1) 00:15:43.412 7.443 - 7.490: 98.5863% ( 1) 00:15:43.412 7.490 - 7.538: 98.6150% ( 4) 00:15:43.412 7.538 - 7.585: 98.6365% ( 3) 00:15:43.412 7.585 - 7.633: 98.6437% ( 1) 00:15:43.412 7.727 - 7.775: 98.6509% ( 1) 00:15:43.412 7.775 - 7.822: 98.6652% ( 2) 00:15:43.412 7.822 - 7.870: 98.6796% ( 2) 00:15:43.412 7.964 - 8.012: 98.6868% ( 1) 00:15:43.412 8.012 - 8.059: 98.6939% ( 1) 00:15:43.412 8.059 - 8.107: 98.7011% ( 1) 00:15:43.412 8.154 - 8.201: 98.7083% ( 1) 00:15:43.412 8.249 - 8.296: 98.7226% ( 2) 00:15:43.412 8.296 - 8.344: 98.7298% ( 1) 00:15:43.412 8.344 - 8.391: 98.7370% ( 1) 00:15:43.412 8.391 - 8.439: 98.7442% ( 1) 00:15:43.412 8.439 - 8.486: 98.7513% ( 1) 00:15:43.412 8.486 - 8.533: 98.7585% ( 1) 00:15:43.412 8.581 - 8.628: 98.7729% ( 2) 00:15:43.412 8.628 - 8.676: 98.7872% ( 2) 00:15:43.412 8.676 - 8.723: 98.7944% ( 1) 00:15:43.412 8.818 - 8.865: 98.8016% ( 1) 00:15:43.412 8.960 - 9.007: 98.8088% ( 1) 00:15:43.412 9.576 - 9.624: 98.8159% ( 1) 00:15:43.412 9.671 - 9.719: 98.8303% ( 2) 00:15:43.412 9.908 - 9.956: 98.8375% ( 1) 00:15:43.412 10.003 - 10.050: 98.8446% ( 1) 00:15:43.412 10.145 - 10.193: 98.8518% ( 1) 00:15:43.412 10.193 - 10.240: 98.8590% ( 1) 00:15:43.412 10.240 - 10.287: 98.8662% ( 1) 00:15:43.412 10.619 - 10.667: 98.8733% ( 1) 00:15:43.412 10.761 - 10.809: 
98.8805% ( 1) 00:15:43.412 10.856 - 10.904: 98.8949% ( 2) 00:15:43.412 11.046 - 11.093: 98.9020% ( 1) 00:15:43.412 11.188 - 11.236: 98.9092% ( 1) 00:15:43.412 11.236 - 11.283: 98.9164% ( 1) 00:15:43.412 11.378 - 11.425: 98.9236% ( 1) 00:15:43.412 11.662 - 11.710: 98.9307% ( 1) 00:15:43.412 11.710 - 11.757: 98.9379% ( 1) 00:15:43.412 12.041 - 12.089: 98.9451% ( 1) 00:15:43.412 12.089 - 12.136: 98.9523% ( 1) 00:15:43.412 12.231 - 12.326: 98.9595% ( 1) 00:15:43.412 12.326 - 12.421: 98.9666% ( 1) 00:15:43.412 12.421 - 12.516: 98.9738% ( 1) 00:15:43.412 12.516 - 12.610: 98.9882% ( 2) 00:15:43.412 13.179 - 13.274: 99.0025% ( 2) 00:15:43.412 13.274 - 13.369: 99.0240% ( 3) 00:15:43.412 13.559 - 13.653: 99.0384% ( 2) 00:15:43.412 13.653 - 13.748: 99.0456% ( 1) 00:15:43.412 14.033 - 14.127: 99.0527% ( 1) 00:15:43.412 14.222 - 14.317: 99.0599% ( 1) 00:15:43.412 14.317 - 14.412: 99.0671% ( 1) 00:15:43.412 14.412 - 14.507: 99.0743% ( 1) 00:15:43.412 14.507 - 14.601: 99.0814% ( 1) 00:15:43.412 14.601 - 14.696: 99.0886% ( 1) 00:15:43.412 17.161 - 17.256: 99.0958% ( 1) 00:15:43.412 17.351 - 17.446: 99.1102% ( 2) 00:15:43.412 17.446 - 17.541: 99.1460% ( 5) 00:15:43.412 17.541 - 17.636: 99.1891% ( 6) 00:15:43.412 17.636 - 17.730: 99.2250% ( 5) 00:15:43.412 17.730 - 17.825: 99.2680% ( 6) 00:15:43.412 17.825 - 17.920: 99.3398% ( 10) 00:15:43.412 17.920 - 18.015: 99.3972% ( 8) 00:15:43.412 18.015 - 18.110: 99.4474% ( 7) 00:15:43.412 18.110 - 18.204: 99.4977% ( 7) 00:15:43.412 18.204 - 18.299: 99.5694% ( 10) 00:15:43.412 18.299 - 18.394: 99.5838% ( 2) 00:15:43.412 18.394 - 18.489: 99.6268% ( 6) 00:15:43.412 18.489 - 18.584: 99.6555% ( 4) 00:15:43.412 18.584 - 18.679: 99.6986% ( 6) 00:15:43.412 18.679 - 18.773: 99.7201% ( 3) 00:15:43.412 18.773 - 18.868: 99.7560% ( 5) 00:15:43.412 18.868 - 18.963: 99.7704% ( 2) 00:15:43.412 18.963 - 19.058: 99.7919% ( 3) 00:15:43.412 19.058 - 19.153: 99.7991% ( 1) 00:15:43.412 19.153 - 19.247: 99.8134% ( 2) 00:15:43.412 19.247 - 19.342: 99.8206% ( 1) 
00:15:43.412 19.342 - 19.437: 99.8349% ( 2) 00:15:43.412 19.437 - 19.532: 99.8493% ( 2) 00:15:43.412 19.532 - 19.627: 99.8565% ( 1) 00:15:43.412 20.101 - 20.196: 99.8637% ( 1) 00:15:43.412 20.480 - 20.575: 99.8708% ( 1) 00:15:43.412 22.281 - 22.376: 99.8780% ( 1) 00:15:43.412 25.221 - 25.410: 99.8852% ( 1) 00:15:43.412 27.307 - 27.496: 99.8924% ( 1) 00:15:43.412 27.496 - 27.686: 99.8995% ( 1) 00:15:43.412 27.686 - 27.876: 99.9067% ( 1) 00:15:43.412 2754.939 - 2767.076: 99.9139% ( 1) 00:15:43.412 3980.705 - 4004.978: 99.9713% ( 8) 00:15:43.412 4004.978 - 4029.250: 100.0000% ( 4) 00:15:43.412 00:15:43.412 Complete histogram 00:15:43.412 ================== 00:15:43.412 Range in us Cumulative Count 00:15:43.412 2.027 - 2.039: 0.0144% ( 2) 00:15:43.412 2.039 - 2.050: 6.4227% ( 893) 00:15:43.412 2.050 - 2.062: 15.2924% ( 1236) 00:15:43.413 2.062 - 2.074: 17.6175% ( 324) 00:15:43.413 2.074 - 2.086: 44.4708% ( 3742) 00:15:43.413 2.086 - 2.098: 60.2440% ( 2198) 00:15:43.413 2.098 - 2.110: 62.6050% ( 329) 00:15:43.413 2.110 - 2.121: 67.3197% ( 657) 00:15:43.413 2.121 - 2.133: 69.0635% ( 243) 00:15:43.413 2.133 - 2.145: 72.0201% ( 412) 00:15:43.413 2.145 - 2.157: 84.0617% ( 1678) 00:15:43.413 2.157 - 2.169: 87.9656% ( 544) 00:15:43.413 2.169 - 2.181: 89.1209% ( 161) 00:15:43.413 2.181 - 2.193: 90.5131% ( 194) 00:15:43.413 2.193 - 2.204: 91.1590% ( 90) 00:15:43.413 2.204 - 2.216: 92.3215% ( 162) 00:15:43.413 2.216 - 2.228: 94.5461% ( 310) 00:15:43.413 2.228 - 2.240: 95.0915% ( 76) 00:15:43.413 2.240 - 2.252: 95.4216% ( 46) 00:15:43.413 2.252 - 2.264: 95.6225% ( 28) 00:15:43.413 2.264 - 2.276: 95.7302% ( 15) 00:15:43.413 2.276 - 2.287: 95.9024% ( 24) 00:15:43.413 2.287 - 2.299: 95.9957% ( 13) 00:15:43.413 2.299 - 2.311: 96.0388% ( 6) 00:15:43.413 2.311 - 2.323: 96.1751% ( 19) 00:15:43.413 2.323 - 2.335: 96.3186% ( 20) 00:15:43.413 2.335 - 2.347: 96.5913% ( 38) 00:15:43.413 2.347 - 2.359: 96.8568% ( 37) 00:15:43.413 2.359 - 2.370: 97.1869% ( 46) 00:15:43.413 2.370 - 2.382: 
97.4740% ( 40) 00:15:43.413 2.382 - 2.394: 97.7395% ( 37) 00:15:43.413 2.394 - 2.406: 97.9548% ( 30) 00:15:43.413 2.406 - 2.418: 98.1198% ( 23) 00:15:43.413 2.418 - 2.430: 98.1773% ( 8) 00:15:43.413 2.430 - 2.441: 98.2562% ( 11) 00:15:43.413 2.441 - 2.453: 98.2992% ( 6) 00:15:43.413 2.453 - 2.465: 98.3208% ( 3) 00:15:43.413 2.465 - 2.477: 98.3567% ( 5) 00:15:43.413 2.477 - 2.489: 98.3782% ( 3) 00:15:43.413 2.489 - 2.501: 98.3925% ( 2) 00:15:43.413 2.513 - 2.524: 98.3997% ( 1) 00:15:43.413 2.524 - 2.536: 98.4069% ( 1) 00:15:43.413 2.560 - 2.572: 98.4141% ( 1) 00:15:43.413 2.572 - 2.584: 98.4356% ( 3) 00:15:43.413 2.584 - 2.596: 98.4428% ( 1) 00:15:43.413 2.607 - 2.619: 98.4571% ( 2) 00:15:43.413 2.619 - 2.631: 98.4643% ( 1) 00:15:43.413 2.643 - 2.655: 98.4715% ( 1) 00:15:43.413 2.667 - 2.679: 98.4787% ( 1) 00:15:43.413 2.690 - 2.702: 98.4930% ( 2) 00:15:43.413 2.726 - 2.738: 98.5002% ( 1) 00:15:43.413 2.738 - 2.750: 98.5074% ( 1) 00:15:43.413 2.773 - 2.785: 98.5145% ( 1) 00:15:43.413 2.833 - 2.844: 98.5217% ( 1) 00:15:43.413 2.904 - 2.916: 98.5361% ( 2) 00:15:43.413 2.951 - 2.963: 98.5432% ( 1) 00:15:43.413 2.999 - 3.010: 98.5504% ( 1) 00:15:43.413 3.010 - 3.022: 98.5576% ( 1) 00:15:43.413 3.058 - 3.081: 98.5719% ( 2) 00:15:43.413 3.081 - 3.105: 98.5863% ( 2) 00:15:43.413 3.105 - 3.129: 98.6006% ( 2) 00:15:43.413 3.129 - 3.153: 98.6150% ( 2) 00:15:43.413 3.153 - 3.176: 98.6222% ( 1) 00:15:43.413 3.200 - 3.224: 98.6437% ( 3) 00:15:43.413 3.224 - 3.247: 98.6509% ( 1) 00:15:43.413 3.247 - 3.271: 98.6796% ( 4) 00:15:43.413 3.271 - 3.295: 98.6939% ( 2) 00:15:43.413 3.295 - 3.319: 98.7083% ( 2) 00:15:43.413 3.319 - 3.342: 98.7155% ( 1) 00:15:43.413 3.342 - 3.366: 98.7226% ( 1) 00:15:43.413 3.366 - 3.390: 98.7442% ( 3) 00:15:43.413 3.413 - 3.437: 98.7729% ( 4) 00:15:43.413 3.437 - 3.461: 98.7801% ( 1) 00:15:43.413 3.461 - 3.484: 98.8088% ( 4) 00:15:43.413 3.508 - 3.532: 98.8159% ( 1) 00:15:43.413 3.650 - 3.674: 98.8231% ( 1) 00:15:43.413 3.698 - 3.721: 98.8303% ( 1) 
00:15:43.413 3.745 - 3.769: 98.8375% ( 1) 00:15:43.413 3.769 - 3.793: 98.8446% ( 1) 00:15:43.413 3.816 - 3.840: 98.8518% ( 1) 00:15:43.413 3.887 - 3.911: 98.8590% ( 1) 00:15:43.413 3.911 - 3.935: 98.8662% ( 1) 00:15:43.413 3.935 - 3.959: 98.8733% ( 1) 00:15:43.413 4.361 - 4.385: 98.8805% ( 1) 00:15:43.413 4.883 - 4.907: 98.8877% ( 1) 00:15:43.413 5.073 - 5.096: 98.8949% ( 1) 00:15:43.413 5.357 - 5.381: 98.9020% ( 1) 00:15:43.413 5.547 - 5.570: 98.9092% ( 1) 00:15:43.413 5.594 - 5.618: 98.9164% ( 1) 00:15:43.413 5.641 - 5.665: 98.9236% ( 1) 00:15:43.413 5.713 - 5.736: 98.9307% ( 1) 00:15:43.413 6.258 - 6.305: 98.9379% ( 1) 00:15:43.413 6.353 - 6.400: 98.9451% ( 1) 00:15:43.413 6.542 - 6.590: 98.9595% ( 2) 00:15:43.413 6.684 - 6.732: 98.9666% ( 1) 00:15:43.413 6.969 - 7.016: 98.9738% ( 1) 00:15:43.413 8.628 - 8.676: 98.9810% ( 1) 00:15:43.413 15.644 - 15.739: 99.0025% ( 3) 00:15:43.413 15.739 - 15.834: 99.0097% ( 1) 00:15:43.413 15.834 - 15.929: 99.0312% ( 3) 00:15:43.413 15.929 - 16.024: 99.0671% ( 5) 00:15:43.413 16.024 - 16.119: 99.0743% ( 1) 00:15:43.413 16.119 - 16.213: 99.0958% ( 3) 00:15:43.413 16.213 - 16.308: 99.1245% ( 4) 00:15:43.413 16.308 - 16.403: 99.1460% ( 3) 00:15:43.413 16.403 - 16.498: 99.1747% ( 4) 00:15:43.413 16.498 - 16.593: 99.2178% ( 6) 00:15:43.413 16.593 - 16.687: 99.2393% ( 3) 00:15:43.413 16.687 - 16.782: 99.2896% ( 7) 00:15:43.413 16.782 - 16.877: 99.3039% ( 2) 00:15:43.413 16.972 - 17.067: 99.3254% ( 3) 00:15:43.413 17.067 - 17.161: 99.3470% ( 3) 00:15:43.413 17.256 - 17.351: 99.3613% ( 2) 00:15:43.413 17.446 - 17.541: 99.3828% ( 3) 00:15:43.413 17.541 - 17.636: 99.3972% ( 2) 00:15:43.413 17.636 - 17.730: 99.4044% ( 1) 00:15:43.413 17.825 - 17.920: 99.4116% ( 1) 00:15:43.413 18.015 - 18.110: 99.4187% ( 1) 00:15:43.413 18.773 - 18.868: 99.4259% ( 1) 00:15:43.413 26.169 - 26.359: 99.4331% ( 1) 00:15:43.413 3980.705 - 4004.978: 99.8206% ( 54) 00:15:43.413 4004.978 - 4029.250: 99.9713% ( 21) 00:15:43.413 4029.250 - 4053.523: 99.9785% ( 1) 
00:15:43.413 4053.523 - 4077.796: 99.9856% ( 1) 00:15:43.413 4077.796 - 4102.068: 99.9928% ( 1) 00:15:43.413 5000.154 - 5024.427: 100.0000% ( 1) 00:15:43.413 00:15:43.413 03:03:38 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:43.413 03:03:38 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:43.413 03:03:38 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:43.413 03:03:38 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:43.413 03:03:38 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:43.671 [ 00:15:43.671 { 00:15:43.671 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:43.671 "subtype": "Discovery", 00:15:43.671 "listen_addresses": [], 00:15:43.671 "allow_any_host": true, 00:15:43.672 "hosts": [] 00:15:43.672 }, 00:15:43.672 { 00:15:43.672 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:43.672 "subtype": "NVMe", 00:15:43.672 "listen_addresses": [ 00:15:43.672 { 00:15:43.672 "transport": "VFIOUSER", 00:15:43.672 "trtype": "VFIOUSER", 00:15:43.672 "adrfam": "IPv4", 00:15:43.672 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:43.672 "trsvcid": "0" 00:15:43.672 } 00:15:43.672 ], 00:15:43.672 "allow_any_host": true, 00:15:43.672 "hosts": [], 00:15:43.672 "serial_number": "SPDK1", 00:15:43.672 "model_number": "SPDK bdev Controller", 00:15:43.672 "max_namespaces": 32, 00:15:43.672 "min_cntlid": 1, 00:15:43.672 "max_cntlid": 65519, 00:15:43.672 "namespaces": [ 00:15:43.672 { 00:15:43.672 "nsid": 1, 00:15:43.672 "bdev_name": "Malloc1", 00:15:43.672 "name": "Malloc1", 00:15:43.672 "nguid": "EF5F0CC5C7C448B6811B594DA0766F16", 00:15:43.672 "uuid": "ef5f0cc5-c7c4-48b6-811b-594da0766f16" 00:15:43.672 }, 00:15:43.672 { 00:15:43.672 "nsid": 2, 00:15:43.672 "bdev_name": "Malloc3", 00:15:43.672 "name": "Malloc3", 00:15:43.672 
"nguid": "081FC8C60603450D89CD8E4693A70117", 00:15:43.672 "uuid": "081fc8c6-0603-450d-89cd-8e4693a70117" 00:15:43.672 } 00:15:43.672 ] 00:15:43.672 }, 00:15:43.672 { 00:15:43.672 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:43.672 "subtype": "NVMe", 00:15:43.672 "listen_addresses": [ 00:15:43.672 { 00:15:43.672 "transport": "VFIOUSER", 00:15:43.672 "trtype": "VFIOUSER", 00:15:43.672 "adrfam": "IPv4", 00:15:43.672 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:43.672 "trsvcid": "0" 00:15:43.672 } 00:15:43.672 ], 00:15:43.672 "allow_any_host": true, 00:15:43.672 "hosts": [], 00:15:43.672 "serial_number": "SPDK2", 00:15:43.672 "model_number": "SPDK bdev Controller", 00:15:43.672 "max_namespaces": 32, 00:15:43.672 "min_cntlid": 1, 00:15:43.672 "max_cntlid": 65519, 00:15:43.672 "namespaces": [ 00:15:43.672 { 00:15:43.672 "nsid": 1, 00:15:43.672 "bdev_name": "Malloc2", 00:15:43.672 "name": "Malloc2", 00:15:43.672 "nguid": "BC2FF3DEAF464CDC95AA7979642C4FD1", 00:15:43.672 "uuid": "bc2ff3de-af46-4cdc-95aa-7979642c4fd1" 00:15:43.672 } 00:15:43.672 ] 00:15:43.672 } 00:15:43.672 ] 00:15:43.672 03:03:38 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:43.672 03:03:38 -- target/nvmf_vfio_user.sh@34 -- # aerpid=1980352 00:15:43.672 03:03:38 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:43.672 03:03:38 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:43.672 03:03:38 -- common/autotest_common.sh@1244 -- # local i=0 00:15:43.672 03:03:38 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:43.672 03:03:38 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:43.672 03:03:38 -- common/autotest_common.sh@1255 -- # return 0 00:15:43.672 03:03:38 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:43.672 03:03:38 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:43.931 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.190 Malloc4 00:15:44.190 03:03:39 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:44.450 03:03:39 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:44.450 Asynchronous Event Request test 00:15:44.450 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:44.450 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:44.450 Registering asynchronous event callbacks... 00:15:44.450 Starting namespace attribute notice tests for all controllers... 00:15:44.450 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:44.450 aer_cb - Changed Namespace 00:15:44.450 Cleaning up... 
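The AER test above gates on `waitforfile /tmp/aer_touch_file` before removing the touch file and re-querying the subsystems. A minimal sketch of that polling pattern, assuming an illustrative retry count and sleep interval (the real helper in common/autotest_common.sh may use different values):

```shell
# Sketch of the waitforfile polling pattern used by the AER test above.
# Assumption: max_retries and the 0.1s sleep are illustrative, not the
# exact values from common/autotest_common.sh.
waitforfile() {
    local path=$1
    local i=0
    local max_retries=100
    # Poll until the file appears or the retry budget is exhausted.
    while [ ! -e "$path" ] && [ "$i" -lt "$max_retries" ]; do
        i=$((i + 1))
        sleep 0.1
    done
    # Succeed (return 0) only if the file actually exists now.
    [ -e "$path" ]
}
```

The aer binary touches the file once its event callbacks are registered, so the driving script blocks here instead of racing the controller attach.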
00:15:44.450 [ 00:15:44.450 { 00:15:44.450 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:44.450 "subtype": "Discovery", 00:15:44.450 "listen_addresses": [], 00:15:44.450 "allow_any_host": true, 00:15:44.450 "hosts": [] 00:15:44.450 }, 00:15:44.450 { 00:15:44.450 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:44.450 "subtype": "NVMe", 00:15:44.450 "listen_addresses": [ 00:15:44.450 { 00:15:44.450 "transport": "VFIOUSER", 00:15:44.450 "trtype": "VFIOUSER", 00:15:44.450 "adrfam": "IPv4", 00:15:44.450 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:44.450 "trsvcid": "0" 00:15:44.450 } 00:15:44.450 ], 00:15:44.450 "allow_any_host": true, 00:15:44.450 "hosts": [], 00:15:44.450 "serial_number": "SPDK1", 00:15:44.450 "model_number": "SPDK bdev Controller", 00:15:44.450 "max_namespaces": 32, 00:15:44.450 "min_cntlid": 1, 00:15:44.450 "max_cntlid": 65519, 00:15:44.450 "namespaces": [ 00:15:44.450 { 00:15:44.450 "nsid": 1, 00:15:44.450 "bdev_name": "Malloc1", 00:15:44.450 "name": "Malloc1", 00:15:44.450 "nguid": "EF5F0CC5C7C448B6811B594DA0766F16", 00:15:44.450 "uuid": "ef5f0cc5-c7c4-48b6-811b-594da0766f16" 00:15:44.450 }, 00:15:44.450 { 00:15:44.450 "nsid": 2, 00:15:44.450 "bdev_name": "Malloc3", 00:15:44.450 "name": "Malloc3", 00:15:44.450 "nguid": "081FC8C60603450D89CD8E4693A70117", 00:15:44.450 "uuid": "081fc8c6-0603-450d-89cd-8e4693a70117" 00:15:44.450 } 00:15:44.450 ] 00:15:44.450 }, 00:15:44.450 { 00:15:44.450 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:44.450 "subtype": "NVMe", 00:15:44.450 "listen_addresses": [ 00:15:44.450 { 00:15:44.450 "transport": "VFIOUSER", 00:15:44.450 "trtype": "VFIOUSER", 00:15:44.450 "adrfam": "IPv4", 00:15:44.450 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:44.450 "trsvcid": "0" 00:15:44.450 } 00:15:44.450 ], 00:15:44.450 "allow_any_host": true, 00:15:44.450 "hosts": [], 00:15:44.450 "serial_number": "SPDK2", 00:15:44.450 "model_number": "SPDK bdev Controller", 00:15:44.450 "max_namespaces": 32, 00:15:44.450 
"min_cntlid": 1, 00:15:44.450 "max_cntlid": 65519, 00:15:44.450 "namespaces": [ 00:15:44.450 { 00:15:44.450 "nsid": 1, 00:15:44.450 "bdev_name": "Malloc2", 00:15:44.450 "name": "Malloc2", 00:15:44.450 "nguid": "BC2FF3DEAF464CDC95AA7979642C4FD1", 00:15:44.450 "uuid": "bc2ff3de-af46-4cdc-95aa-7979642c4fd1" 00:15:44.450 }, 00:15:44.450 { 00:15:44.450 "nsid": 2, 00:15:44.450 "bdev_name": "Malloc4", 00:15:44.450 "name": "Malloc4", 00:15:44.450 "nguid": "F7053444E43D40A5A06C8CDFA3FC5A0D", 00:15:44.450 "uuid": "f7053444-e43d-40a5-a06c-8cdfa3fc5a0d" 00:15:44.450 } 00:15:44.450 ] 00:15:44.450 } 00:15:44.450 ] 00:15:44.450 03:03:39 -- target/nvmf_vfio_user.sh@44 -- # wait 1980352 00:15:44.450 03:03:39 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:44.450 03:03:39 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1973950 00:15:44.450 03:03:39 -- common/autotest_common.sh@926 -- # '[' -z 1973950 ']' 00:15:44.450 03:03:39 -- common/autotest_common.sh@930 -- # kill -0 1973950 00:15:44.450 03:03:39 -- common/autotest_common.sh@931 -- # uname 00:15:44.450 03:03:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:44.450 03:03:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1973950 00:15:44.709 03:03:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:44.709 03:03:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:44.710 03:03:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1973950' 00:15:44.710 killing process with pid 1973950 00:15:44.710 03:03:39 -- common/autotest_common.sh@945 -- # kill 1973950 00:15:44.710 [2024-07-14 03:03:39.713189] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:15:44.710 03:03:39 -- common/autotest_common.sh@950 -- # wait 1973950 00:15:44.969 03:03:40 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 
00:15:44.969 03:03:40 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:44.969 03:03:40 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:44.969 03:03:40 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:44.969 03:03:40 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:44.969 03:03:40 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1980498 00:15:44.969 03:03:40 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:44.969 03:03:40 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1980498' 00:15:44.969 Process pid: 1980498 00:15:44.969 03:03:40 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:44.969 03:03:40 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1980498 00:15:44.969 03:03:40 -- common/autotest_common.sh@819 -- # '[' -z 1980498 ']' 00:15:44.969 03:03:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.969 03:03:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:44.969 03:03:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.969 03:03:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:44.969 03:03:40 -- common/autotest_common.sh@10 -- # set +x 00:15:44.969 [2024-07-14 03:03:40.067637] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:44.969 [2024-07-14 03:03:40.068640] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:44.969 [2024-07-14 03:03:40.068696] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:44.969 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.969 [2024-07-14 03:03:40.128158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:44.969 [2024-07-14 03:03:40.215529] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:44.969 [2024-07-14 03:03:40.215669] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:44.969 [2024-07-14 03:03:40.215685] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:44.969 [2024-07-14 03:03:40.215697] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:44.969 [2024-07-14 03:03:40.215751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:44.969 [2024-07-14 03:03:40.215811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:44.969 [2024-07-14 03:03:40.215884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:44.969 [2024-07-14 03:03:40.215889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:45.229 [2024-07-14 03:03:40.313240] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:15:45.229 [2024-07-14 03:03:40.313486] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:15:45.229 [2024-07-14 03:03:40.313761] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 
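Here nvmf_tgt is launched with a core list (`-m '[0,1,2,3]'`), which EAL reports as `-l 0,1,2,3` and which starts four reactors; the compliance run later in this log uses the equivalent bitmask form (`-m 0x7`, cores 0-2). A sketch of how a core list folds into that mask, assuming the two forms select the same cores (SPDK parses both itself; the arithmetic below is purely illustrative):

```shell
# Fold a list of core IDs into the hex bitmask form of -m.
core_mask() {
    local mask=0 c
    for c in "$@"; do
        # Set bit <c> in the mask for each requested core.
        mask=$(( mask | (1 << c) ))
    done
    printf '0x%x\n' "$mask"
}

core_mask 0 1 2 3   # → 0xf  (cores 0-3, as in -m '[0,1,2,3]')
core_mask 0 1 2     # → 0x7  (the compliance run's -m 0x7)
```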
00:15:45.229 [2024-07-14 03:03:40.314537] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:45.229 [2024-07-14 03:03:40.314653] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:15:45.798 03:03:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:45.798 03:03:41 -- common/autotest_common.sh@852 -- # return 0 00:15:45.798 03:03:41 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:47.177 03:03:42 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:47.177 03:03:42 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:47.177 03:03:42 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:47.177 03:03:42 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:47.177 03:03:42 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:47.177 03:03:42 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:47.435 Malloc1 00:15:47.435 03:03:42 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:47.694 03:03:42 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:47.952 03:03:43 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:48.210 03:03:43 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:48.210 03:03:43 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user2/2 00:15:48.210 03:03:43 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:48.467 Malloc2 00:15:48.467 03:03:43 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:48.724 03:03:43 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:48.982 03:03:44 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:49.293 03:03:44 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:49.293 03:03:44 -- target/nvmf_vfio_user.sh@95 -- # killprocess 1980498 00:15:49.293 03:03:44 -- common/autotest_common.sh@926 -- # '[' -z 1980498 ']' 00:15:49.293 03:03:44 -- common/autotest_common.sh@930 -- # kill -0 1980498 00:15:49.293 03:03:44 -- common/autotest_common.sh@931 -- # uname 00:15:49.293 03:03:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:49.293 03:03:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1980498 00:15:49.293 03:03:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:49.293 03:03:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:49.293 03:03:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1980498' 00:15:49.293 killing process with pid 1980498 00:15:49.293 03:03:44 -- common/autotest_common.sh@945 -- # kill 1980498 00:15:49.293 03:03:44 -- common/autotest_common.sh@950 -- # wait 1980498 00:15:49.552 03:03:44 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:49.552 03:03:44 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM 
EXIT 00:15:49.552 00:15:49.552 real 0m53.608s 00:15:49.552 user 3m31.888s 00:15:49.552 sys 0m4.591s 00:15:49.552 03:03:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:49.552 03:03:44 -- common/autotest_common.sh@10 -- # set +x 00:15:49.552 ************************************ 00:15:49.552 END TEST nvmf_vfio_user 00:15:49.552 ************************************ 00:15:49.552 03:03:44 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:49.552 03:03:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:49.552 03:03:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:49.552 03:03:44 -- common/autotest_common.sh@10 -- # set +x 00:15:49.552 ************************************ 00:15:49.552 START TEST nvmf_vfio_user_nvme_compliance 00:15:49.552 ************************************ 00:15:49.552 03:03:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:49.811 * Looking for test storage... 
00:15:49.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:49.811 03:03:44 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:49.812 03:03:44 -- nvmf/common.sh@7 -- # uname -s 00:15:49.812 03:03:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:49.812 03:03:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:49.812 03:03:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:49.812 03:03:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:49.812 03:03:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:49.812 03:03:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:49.812 03:03:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:49.812 03:03:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:49.812 03:03:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:49.812 03:03:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:49.812 03:03:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:49.812 03:03:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:49.812 03:03:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:49.812 03:03:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:49.812 03:03:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:49.812 03:03:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:49.812 03:03:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:49.812 03:03:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:49.812 03:03:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:49.812 03:03:44 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.812 03:03:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.812 03:03:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.812 03:03:44 -- paths/export.sh@5 -- # export PATH 00:15:49.812 03:03:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:49.812 03:03:44 -- nvmf/common.sh@46 -- # : 0 00:15:49.812 03:03:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:49.812 03:03:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:49.812 03:03:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:49.812 03:03:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:49.812 03:03:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:49.812 03:03:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:49.812 03:03:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:49.812 03:03:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:49.812 03:03:44 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:49.812 03:03:44 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:49.812 03:03:44 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:49.812 03:03:44 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:49.812 03:03:44 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:49.812 03:03:44 -- compliance/compliance.sh@20 -- # nvmfpid=1981240 00:15:49.812 03:03:44 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:49.812 03:03:44 -- compliance/compliance.sh@21 -- # echo 'Process pid: 1981240' 00:15:49.812 Process pid: 1981240 00:15:49.812 03:03:44 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' 
SIGINT SIGTERM EXIT 00:15:49.812 03:03:44 -- compliance/compliance.sh@24 -- # waitforlisten 1981240 00:15:49.812 03:03:44 -- common/autotest_common.sh@819 -- # '[' -z 1981240 ']' 00:15:49.812 03:03:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.812 03:03:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:49.812 03:03:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.812 03:03:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:49.812 03:03:44 -- common/autotest_common.sh@10 -- # set +x 00:15:49.812 [2024-07-14 03:03:44.874113] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:49.812 [2024-07-14 03:03:44.874201] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.812 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.812 [2024-07-14 03:03:44.932052] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:49.812 [2024-07-14 03:03:45.014328] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:49.812 [2024-07-14 03:03:45.014487] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.812 [2024-07-14 03:03:45.014505] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.812 [2024-07-14 03:03:45.014517] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
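Every target in this log is wired up with the same fixed RPC sequence: create the VFIOUSER transport once, then per device make a socket directory, a malloc bdev, a subsystem, a namespace, and a vfio-user listener. A dry-run sketch that only prints that sequence (the rpc.py path and device count are assumptions, and the commands are echoed rather than executed):

```shell
# Dry-run sketch of the per-device vfio-user setup sequence seen above.
# Assumption: $rpc points at scripts/rpc.py inside an SPDK checkout.
rpc="scripts/rpc.py"
NUM_DEVICES=2

vfio_user_setup_cmds() {
    echo "$rpc nvmf_create_transport -t VFIOUSER"
    local i
    for i in $(seq 1 "$NUM_DEVICES"); do
        echo "mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i"
        echo "$rpc bdev_malloc_create 64 512 -b Malloc$i"
        echo "$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i"
        echo "$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i"
        echo "$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0"
    done
}
```

The compliance test that follows repeats the same shape with a single device (malloc0, nqn.2021-09.io.spdk:cnode0) before pointing nvme_compliance at the resulting socket.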
00:15:49.812 [2024-07-14 03:03:45.014571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.812 [2024-07-14 03:03:45.014620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.812 [2024-07-14 03:03:45.014624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.750 03:03:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:50.750 03:03:45 -- common/autotest_common.sh@852 -- # return 0 00:15:50.750 03:03:45 -- compliance/compliance.sh@26 -- # sleep 1 00:15:51.689 03:03:46 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:51.689 03:03:46 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:51.689 03:03:46 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:51.689 03:03:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.689 03:03:46 -- common/autotest_common.sh@10 -- # set +x 00:15:51.689 03:03:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.689 03:03:46 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:51.689 03:03:46 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:51.689 03:03:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.689 03:03:46 -- common/autotest_common.sh@10 -- # set +x 00:15:51.689 malloc0 00:15:51.689 03:03:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.689 03:03:46 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:51.689 03:03:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.689 03:03:46 -- common/autotest_common.sh@10 -- # set +x 00:15:51.689 03:03:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.689 03:03:46 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:51.689 03:03:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.689 
03:03:46 -- common/autotest_common.sh@10 -- # set +x 00:15:51.689 03:03:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.689 03:03:46 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:51.689 03:03:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:51.689 03:03:46 -- common/autotest_common.sh@10 -- # set +x 00:15:51.689 03:03:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:51.689 03:03:46 -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:51.949 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.949 00:15:51.949 00:15:51.949 CUnit - A unit testing framework for C - Version 2.1-3 00:15:51.949 http://cunit.sourceforge.net/ 00:15:51.949 00:15:51.949 00:15:51.949 Suite: nvme_compliance 00:15:51.949 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-14 03:03:47.051759] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:51.949 [2024-07-14 03:03:47.051801] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:51.949 [2024-07-14 03:03:47.051829] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:51.949 passed 00:15:51.949 Test: admin_identify_ctrlr_verify_fused ...passed 00:15:52.209 Test: admin_identify_ns ...[2024-07-14 03:03:47.289898] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:52.209 [2024-07-14 03:03:47.297897] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:52.209 passed 00:15:52.209 Test: admin_get_features_mandatory_features ...passed 00:15:52.469 Test: admin_get_features_optional_features ...passed 00:15:52.469 Test: 
admin_set_features_number_of_queues ...passed 00:15:52.727 Test: admin_get_log_page_mandatory_logs ...passed 00:15:52.727 Test: admin_get_log_page_with_lpo ...[2024-07-14 03:03:47.918879] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:52.727 passed 00:15:52.986 Test: fabric_property_get ...passed 00:15:52.986 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-14 03:03:48.104139] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:52.986 passed 00:15:53.244 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-14 03:03:48.276880] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:53.244 [2024-07-14 03:03:48.292880] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:53.244 passed 00:15:53.244 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-14 03:03:48.380898] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:53.244 passed 00:15:53.502 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-14 03:03:48.539873] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:53.502 [2024-07-14 03:03:48.563873] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:53.502 passed 00:15:53.502 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-14 03:03:48.653374] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:53.502 [2024-07-14 03:03:48.653430] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:53.502 passed 00:15:53.761 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-14 03:03:48.831891] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:53.761 [2024-07-14 03:03:48.839878] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 257 00:15:53.761 [2024-07-14 03:03:48.847890] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:53.762 [2024-07-14 03:03:48.855891] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:53.762 passed 00:15:53.762 Test: admin_create_io_sq_verify_pc ...[2024-07-14 03:03:48.984890] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:54.020 passed 00:15:54.958 Test: admin_create_io_qp_max_qps ...[2024-07-14 03:03:50.182883] nvme_ctrlr.c:5318:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:55.528 passed 00:15:55.787 Test: admin_create_io_sq_shared_cq ...[2024-07-14 03:03:50.794874] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:55.787 passed 00:15:55.787 00:15:55.787 Run Summary: Type Total Ran Passed Failed Inactive 00:15:55.787 suites 1 1 n/a 0 0 00:15:55.787 tests 18 18 18 0 0 00:15:55.787 asserts 360 360 360 0 n/a 00:15:55.787 00:15:55.787 Elapsed time = 1.569 seconds 00:15:55.787 03:03:50 -- compliance/compliance.sh@42 -- # killprocess 1981240 00:15:55.787 03:03:50 -- common/autotest_common.sh@926 -- # '[' -z 1981240 ']' 00:15:55.787 03:03:50 -- common/autotest_common.sh@930 -- # kill -0 1981240 00:15:55.787 03:03:50 -- common/autotest_common.sh@931 -- # uname 00:15:55.787 03:03:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:55.787 03:03:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1981240 00:15:55.787 03:03:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:55.787 03:03:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:55.787 03:03:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1981240' 00:15:55.787 killing process with pid 1981240 00:15:55.787 03:03:50 -- common/autotest_common.sh@945 -- # kill 1981240 00:15:55.787 
03:03:50 -- common/autotest_common.sh@950 -- # wait 1981240 00:15:56.047 03:03:51 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:56.047 03:03:51 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:56.047 00:15:56.047 real 0m6.392s 00:15:56.047 user 0m18.347s 00:15:56.047 sys 0m0.584s 00:15:56.047 03:03:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:56.047 03:03:51 -- common/autotest_common.sh@10 -- # set +x 00:15:56.047 ************************************ 00:15:56.047 END TEST nvmf_vfio_user_nvme_compliance 00:15:56.047 ************************************ 00:15:56.047 03:03:51 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:56.047 03:03:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:56.047 03:03:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:56.047 03:03:51 -- common/autotest_common.sh@10 -- # set +x 00:15:56.047 ************************************ 00:15:56.047 START TEST nvmf_vfio_user_fuzz 00:15:56.047 ************************************ 00:15:56.047 03:03:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:56.047 * Looking for test storage... 
00:15:56.047 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:56.047 03:03:51 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:56.047 03:03:51 -- nvmf/common.sh@7 -- # uname -s 00:15:56.047 03:03:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:56.047 03:03:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:56.047 03:03:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:56.047 03:03:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:56.047 03:03:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:56.047 03:03:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:56.047 03:03:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:56.047 03:03:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:56.047 03:03:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:56.047 03:03:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:56.047 03:03:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:56.047 03:03:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:56.047 03:03:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:56.047 03:03:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:56.047 03:03:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:56.047 03:03:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:56.047 03:03:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:56.047 03:03:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:56.047 03:03:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:56.047 03:03:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.047 03:03:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.047 03:03:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.047 03:03:51 -- paths/export.sh@5 -- # export PATH 00:15:56.047 03:03:51 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:56.047 03:03:51 -- nvmf/common.sh@46 -- # : 0 00:15:56.047 03:03:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:56.047 03:03:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:56.047 03:03:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:56.047 03:03:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:56.047 03:03:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:56.047 03:03:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:56.047 03:03:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:56.047 03:03:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:56.047 03:03:51 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:56.047 03:03:51 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:56.047 03:03:51 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:56.047 03:03:51 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:56.047 03:03:51 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:56.047 03:03:51 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:56.047 03:03:51 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:56.047 03:03:51 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1981991 00:15:56.047 03:03:51 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:56.047 03:03:51 -- 
target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1981991' 00:15:56.047 Process pid: 1981991 00:15:56.047 03:03:51 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:56.047 03:03:51 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1981991 00:15:56.047 03:03:51 -- common/autotest_common.sh@819 -- # '[' -z 1981991 ']' 00:15:56.047 03:03:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.047 03:03:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:56.047 03:03:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.047 03:03:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:56.047 03:03:51 -- common/autotest_common.sh@10 -- # set +x 00:15:57.426 03:03:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:57.426 03:03:52 -- common/autotest_common.sh@852 -- # return 0 00:15:57.426 03:03:52 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:58.364 03:03:53 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:58.364 03:03:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.364 03:03:53 -- common/autotest_common.sh@10 -- # set +x 00:15:58.364 03:03:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.364 03:03:53 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:58.364 03:03:53 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:58.364 03:03:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.364 03:03:53 -- common/autotest_common.sh@10 -- # set +x 00:15:58.364 malloc0 00:15:58.364 03:03:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.364 03:03:53 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 
-a -s spdk 00:15:58.364 03:03:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.364 03:03:53 -- common/autotest_common.sh@10 -- # set +x 00:15:58.364 03:03:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.364 03:03:53 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:58.364 03:03:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.364 03:03:53 -- common/autotest_common.sh@10 -- # set +x 00:15:58.364 03:03:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.364 03:03:53 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:58.364 03:03:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:58.364 03:03:53 -- common/autotest_common.sh@10 -- # set +x 00:15:58.364 03:03:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:58.364 03:03:53 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:58.364 03:03:53 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:30.448 Fuzzing completed. 
Shutting down the fuzz application 00:16:30.448 00:16:30.448 Dumping successful admin opcodes: 00:16:30.448 8, 9, 10, 24, 00:16:30.448 Dumping successful io opcodes: 00:16:30.448 0, 00:16:30.448 NS: 0x200003a1ef00 I/O qp, Total commands completed: 580342, total successful commands: 2229, random_seed: 3575096256 00:16:30.448 NS: 0x200003a1ef00 admin qp, Total commands completed: 143680, total successful commands: 1168, random_seed: 1116320000 00:16:30.448 03:04:23 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:30.448 03:04:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:30.448 03:04:23 -- common/autotest_common.sh@10 -- # set +x 00:16:30.448 03:04:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:30.448 03:04:23 -- target/vfio_user_fuzz.sh@46 -- # killprocess 1981991 00:16:30.448 03:04:23 -- common/autotest_common.sh@926 -- # '[' -z 1981991 ']' 00:16:30.448 03:04:23 -- common/autotest_common.sh@930 -- # kill -0 1981991 00:16:30.448 03:04:23 -- common/autotest_common.sh@931 -- # uname 00:16:30.448 03:04:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:30.448 03:04:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1981991 00:16:30.448 03:04:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:30.448 03:04:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:30.448 03:04:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1981991' 00:16:30.448 killing process with pid 1981991 00:16:30.448 03:04:23 -- common/autotest_common.sh@945 -- # kill 1981991 00:16:30.448 03:04:23 -- common/autotest_common.sh@950 -- # wait 1981991 00:16:30.448 03:04:24 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:30.448 03:04:24 -- 
target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:30.448 00:16:30.448 real 0m32.961s 00:16:30.448 user 0m34.377s 00:16:30.448 sys 0m26.238s 00:16:30.448 03:04:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:30.448 03:04:24 -- common/autotest_common.sh@10 -- # set +x 00:16:30.448 ************************************ 00:16:30.448 END TEST nvmf_vfio_user_fuzz 00:16:30.448 ************************************ 00:16:30.449 03:04:24 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:30.449 03:04:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:30.449 03:04:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:30.449 03:04:24 -- common/autotest_common.sh@10 -- # set +x 00:16:30.449 ************************************ 00:16:30.449 START TEST nvmf_host_management 00:16:30.449 ************************************ 00:16:30.449 03:04:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:30.449 * Looking for test storage... 
00:16:30.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:30.449 03:04:24 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:30.449 03:04:24 -- nvmf/common.sh@7 -- # uname -s 00:16:30.449 03:04:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.449 03:04:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.449 03:04:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.449 03:04:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.449 03:04:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.449 03:04:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.449 03:04:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.449 03:04:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.449 03:04:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.449 03:04:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.449 03:04:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:30.449 03:04:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:30.449 03:04:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.449 03:04:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.449 03:04:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:30.449 03:04:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:30.449 03:04:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.449 03:04:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.449 03:04:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.449 03:04:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.449 03:04:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.449 03:04:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.449 03:04:24 -- paths/export.sh@5 -- # export PATH 00:16:30.449 03:04:24 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.449 03:04:24 -- nvmf/common.sh@46 -- # : 0 00:16:30.449 03:04:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:30.449 03:04:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:30.449 03:04:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:30.449 03:04:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.449 03:04:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.449 03:04:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:30.449 03:04:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:30.449 03:04:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:30.449 03:04:24 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:30.449 03:04:24 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:30.449 03:04:24 -- target/host_management.sh@104 -- # nvmftestinit 00:16:30.449 03:04:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:30.449 03:04:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.449 03:04:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:30.449 03:04:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:30.449 03:04:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:30.449 03:04:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.449 03:04:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.449 03:04:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:16:30.449 03:04:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:30.449 03:04:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:30.449 03:04:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:30.449 03:04:24 -- common/autotest_common.sh@10 -- # set +x 00:16:31.083 03:04:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:31.083 03:04:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:31.083 03:04:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:31.083 03:04:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:31.083 03:04:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:31.083 03:04:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:31.083 03:04:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:31.083 03:04:26 -- nvmf/common.sh@294 -- # net_devs=() 00:16:31.083 03:04:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:31.083 03:04:26 -- nvmf/common.sh@295 -- # e810=() 00:16:31.083 03:04:26 -- nvmf/common.sh@295 -- # local -ga e810 00:16:31.083 03:04:26 -- nvmf/common.sh@296 -- # x722=() 00:16:31.083 03:04:26 -- nvmf/common.sh@296 -- # local -ga x722 00:16:31.083 03:04:26 -- nvmf/common.sh@297 -- # mlx=() 00:16:31.083 03:04:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:31.083 03:04:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:31.084 03:04:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:31.084 03:04:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:31.084 03:04:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:31.084 03:04:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:31.084 03:04:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:31.084 03:04:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:31.084 03:04:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:16:31.084 03:04:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:31.084 03:04:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:31.084 03:04:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:31.084 03:04:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:31.084 03:04:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:31.084 03:04:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:31.084 03:04:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:31.084 03:04:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:31.084 03:04:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:31.084 03:04:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:31.084 03:04:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:31.084 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:31.084 03:04:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:31.084 03:04:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:31.084 03:04:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.084 03:04:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.084 03:04:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:31.084 03:04:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:31.084 03:04:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:31.084 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:31.084 03:04:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:31.084 03:04:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:31.084 03:04:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.084 03:04:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.084 03:04:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:31.084 03:04:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:31.084 03:04:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:31.084 
03:04:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:31.084 03:04:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:31.084 03:04:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.084 03:04:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:31.084 03:04:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.084 03:04:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:31.084 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:31.084 03:04:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.084 03:04:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:31.084 03:04:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.084 03:04:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:31.084 03:04:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.084 03:04:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:31.084 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:31.084 03:04:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.084 03:04:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:31.084 03:04:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:31.084 03:04:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:31.084 03:04:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:31.084 03:04:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:31.084 03:04:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.084 03:04:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:31.084 03:04:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:31.084 03:04:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:31.084 03:04:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:31.084 03:04:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:31.084 03:04:26 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:31.084 03:04:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:31.084 03:04:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.084 03:04:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:31.084 03:04:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:31.084 03:04:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:31.084 03:04:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:31.084 03:04:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:31.084 03:04:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:31.084 03:04:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:31.084 03:04:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:31.084 03:04:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:31.084 03:04:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:31.084 03:04:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:31.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:16:31.084 00:16:31.084 --- 10.0.0.2 ping statistics --- 00:16:31.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.084 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:16:31.084 03:04:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:31.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:31.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:16:31.084 00:16:31.084 --- 10.0.0.1 ping statistics --- 00:16:31.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.084 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:16:31.084 03:04:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.084 03:04:26 -- nvmf/common.sh@410 -- # return 0 00:16:31.084 03:04:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:31.084 03:04:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.084 03:04:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:31.084 03:04:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:31.084 03:04:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.084 03:04:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:31.084 03:04:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:31.084 03:04:26 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:16:31.084 03:04:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:31.084 03:04:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:31.084 03:04:26 -- common/autotest_common.sh@10 -- # set +x 00:16:31.084 ************************************ 00:16:31.084 START TEST nvmf_host_management 00:16:31.084 ************************************ 00:16:31.084 03:04:26 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:16:31.084 03:04:26 -- target/host_management.sh@69 -- # starttarget 00:16:31.084 03:04:26 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:31.084 03:04:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:31.084 03:04:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:31.084 03:04:26 -- common/autotest_common.sh@10 -- # set +x 00:16:31.084 03:04:26 -- nvmf/common.sh@469 -- # nvmfpid=1987689 00:16:31.084 03:04:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:31.084 03:04:26 -- nvmf/common.sh@470 -- # waitforlisten 1987689 00:16:31.084 03:04:26 -- common/autotest_common.sh@819 -- # '[' -z 1987689 ']' 00:16:31.084 03:04:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.084 03:04:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:31.084 03:04:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.084 03:04:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:31.084 03:04:26 -- common/autotest_common.sh@10 -- # set +x 00:16:31.345 [2024-07-14 03:04:26.363872] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:31.345 [2024-07-14 03:04:26.363960] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.345 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.345 [2024-07-14 03:04:26.429266] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:31.345 [2024-07-14 03:04:26.515949] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:31.345 [2024-07-14 03:04:26.516101] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.345 [2024-07-14 03:04:26.516120] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.345 [2024-07-14 03:04:26.516133] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:31.345 [2024-07-14 03:04:26.516282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.345 [2024-07-14 03:04:26.516349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.345 [2024-07-14 03:04:26.516418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:31.345 [2024-07-14 03:04:26.516420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.282 03:04:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:32.282 03:04:27 -- common/autotest_common.sh@852 -- # return 0 00:16:32.282 03:04:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:32.282 03:04:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:32.282 03:04:27 -- common/autotest_common.sh@10 -- # set +x 00:16:32.282 03:04:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.282 03:04:27 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:32.282 03:04:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.282 03:04:27 -- common/autotest_common.sh@10 -- # set +x 00:16:32.282 [2024-07-14 03:04:27.324488] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.282 03:04:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.282 03:04:27 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:32.282 03:04:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:32.282 03:04:27 -- common/autotest_common.sh@10 -- # set +x 00:16:32.282 03:04:27 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:32.282 03:04:27 -- target/host_management.sh@23 -- # cat 00:16:32.282 03:04:27 -- target/host_management.sh@30 -- # rpc_cmd 00:16:32.282 03:04:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.282 03:04:27 -- common/autotest_common.sh@10 -- # set +x 00:16:32.282 
Malloc0 00:16:32.282 [2024-07-14 03:04:27.385690] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:32.282 03:04:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.282 03:04:27 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:32.282 03:04:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:32.282 03:04:27 -- common/autotest_common.sh@10 -- # set +x 00:16:32.282 03:04:27 -- target/host_management.sh@73 -- # perfpid=1987865 00:16:32.282 03:04:27 -- target/host_management.sh@74 -- # waitforlisten 1987865 /var/tmp/bdevperf.sock 00:16:32.282 03:04:27 -- common/autotest_common.sh@819 -- # '[' -z 1987865 ']' 00:16:32.282 03:04:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:32.282 03:04:27 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:32.282 03:04:27 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:32.282 03:04:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:32.282 03:04:27 -- nvmf/common.sh@520 -- # config=() 00:16:32.282 03:04:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:32.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:32.282 03:04:27 -- nvmf/common.sh@520 -- # local subsystem config 00:16:32.282 03:04:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:32.283 03:04:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:32.283 03:04:27 -- common/autotest_common.sh@10 -- # set +x 00:16:32.283 03:04:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:32.283 { 00:16:32.283 "params": { 00:16:32.283 "name": "Nvme$subsystem", 00:16:32.283 "trtype": "$TEST_TRANSPORT", 00:16:32.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:32.283 "adrfam": "ipv4", 00:16:32.283 "trsvcid": "$NVMF_PORT", 00:16:32.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:32.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:32.283 "hdgst": ${hdgst:-false}, 00:16:32.283 "ddgst": ${ddgst:-false} 00:16:32.283 }, 00:16:32.283 "method": "bdev_nvme_attach_controller" 00:16:32.283 } 00:16:32.283 EOF 00:16:32.283 )") 00:16:32.283 03:04:27 -- nvmf/common.sh@542 -- # cat 00:16:32.283 03:04:27 -- nvmf/common.sh@544 -- # jq . 00:16:32.283 03:04:27 -- nvmf/common.sh@545 -- # IFS=, 00:16:32.283 03:04:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:32.283 "params": { 00:16:32.283 "name": "Nvme0", 00:16:32.283 "trtype": "tcp", 00:16:32.283 "traddr": "10.0.0.2", 00:16:32.283 "adrfam": "ipv4", 00:16:32.283 "trsvcid": "4420", 00:16:32.283 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:32.283 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:32.283 "hdgst": false, 00:16:32.283 "ddgst": false 00:16:32.283 }, 00:16:32.283 "method": "bdev_nvme_attach_controller" 00:16:32.283 }' 00:16:32.283 [2024-07-14 03:04:27.461965] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:16:32.283 [2024-07-14 03:04:27.462053] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1987865 ] 00:16:32.283 EAL: No free 2048 kB hugepages reported on node 1 00:16:32.283 [2024-07-14 03:04:27.523753] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.541 [2024-07-14 03:04:27.609114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.799 Running I/O for 10 seconds... 00:16:33.369 03:04:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:33.369 03:04:28 -- common/autotest_common.sh@852 -- # return 0 00:16:33.369 03:04:28 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:33.369 03:04:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:33.369 03:04:28 -- common/autotest_common.sh@10 -- # set +x 00:16:33.369 03:04:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:33.369 03:04:28 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:33.369 03:04:28 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:33.369 03:04:28 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:33.369 03:04:28 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:33.369 03:04:28 -- target/host_management.sh@52 -- # local ret=1 00:16:33.369 03:04:28 -- target/host_management.sh@53 -- # local i 00:16:33.369 03:04:28 -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:33.369 03:04:28 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:33.369 03:04:28 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:33.369 03:04:28 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 
00:16:33.369 03:04:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:33.369 03:04:28 -- common/autotest_common.sh@10 -- # set +x 00:16:33.369 03:04:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:33.369 03:04:28 -- target/host_management.sh@55 -- # read_io_count=1772 00:16:33.369 03:04:28 -- target/host_management.sh@58 -- # '[' 1772 -ge 100 ']' 00:16:33.369 03:04:28 -- target/host_management.sh@59 -- # ret=0 00:16:33.369 03:04:28 -- target/host_management.sh@60 -- # break 00:16:33.369 03:04:28 -- target/host_management.sh@64 -- # return 0 00:16:33.369 03:04:28 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:33.369 03:04:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:33.369 03:04:28 -- common/autotest_common.sh@10 -- # set +x 00:16:33.369 [2024-07-14 03:04:28.485485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.485835] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.485936] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.485950] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.485962] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.485975] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.485987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to 
be set 00:16:33.369 [2024-07-14 03:04:28.485999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486012] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486024] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486097] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486145] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 
03:04:28.486209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486222] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486246] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486269] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486380] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486417] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486430] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486443] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486469] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486491] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486537] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486551] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486564] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486614] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486631] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486643] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486656] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486705] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486717] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486744] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486775] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.486829] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1294370 is same with the state(5) to be set 00:16:33.369 [2024-07-14 03:04:28.487061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.369 [2024-07-14 03:04:28.487101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.369 [2024-07-14 03:04:28.487129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.369 [2024-07-14 03:04:28.487147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.369 
[2024-07-14 03:04:28.487165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:111488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.369 [2024-07-14 03:04:28.487198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:111872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:112256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:112512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:19 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:112768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:107264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:113152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 
03:04:28.487961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.487978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.487992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:114048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109568 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:114560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.370 [2024-07-14 03:04:28.488644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.370 [2024-07-14 03:04:28.488660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.488678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:114816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.488694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.488711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:114944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.488727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 
03:04:28.488744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:115072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.488759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.488777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:115200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.488793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.488809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.488825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.488841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.488857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.488897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:115584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.488926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.488943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.488962] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.488980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:115840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.488997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.489014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:110208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.489031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.489047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:115968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.489063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.489080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:116096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.489096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.489114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.489130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.489147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:39 nsid:1 lba:110720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.489190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.489207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:110848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.489222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.489239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:116224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.489253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.489270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:116352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.489285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.489301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:116480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.489317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.489333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:116608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:33.371 [2024-07-14 03:04:28.489349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:33.371 [2024-07-14 03:04:28.489365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e1080 is same with the state(5) to be set 00:16:33.371 [2024-07-14 03:04:28.489443] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6e1080 was disconnected and freed. reset controller. 00:16:33.371 03:04:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:33.371 03:04:28 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:33.371 03:04:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:33.371 03:04:28 -- common/autotest_common.sh@10 -- # set +x 00:16:33.371 [2024-07-14 03:04:28.490618] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:33.371 task offset: 111232 on job bdev=Nvme0n1 fails 00:16:33.371 00:16:33.371 Latency(us) 00:16:33.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:33.371 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:33.371 Job: Nvme0n1 ended in about 0.66 seconds with error 00:16:33.371 Verification LBA range: start 0x0 length 0x400 00:16:33.371 Nvme0n1 : 0.66 2815.74 175.98 96.26 0.00 21704.53 4004.98 28544.57 00:16:33.371 =================================================================================================================== 00:16:33.371 Total : 2815.74 175.98 96.26 0.00 21704.53 4004.98 28544.57 00:16:33.371 [2024-07-14 03:04:28.492568] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:33.371 [2024-07-14 03:04:28.492599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e6c20 (9): Bad file descriptor 00:16:33.371 03:04:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:33.371 03:04:28 -- target/host_management.sh@87 -- # sleep 1 00:16:33.371 [2024-07-14 03:04:28.502721] 
bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:34.308 03:04:29 -- target/host_management.sh@91 -- # kill -9 1987865 00:16:34.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1987865) - No such process 00:16:34.308 03:04:29 -- target/host_management.sh@91 -- # true 00:16:34.308 03:04:29 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:34.308 03:04:29 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:34.308 03:04:29 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:34.308 03:04:29 -- nvmf/common.sh@520 -- # config=() 00:16:34.308 03:04:29 -- nvmf/common.sh@520 -- # local subsystem config 00:16:34.308 03:04:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:34.308 03:04:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:34.308 { 00:16:34.308 "params": { 00:16:34.308 "name": "Nvme$subsystem", 00:16:34.308 "trtype": "$TEST_TRANSPORT", 00:16:34.308 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:34.308 "adrfam": "ipv4", 00:16:34.308 "trsvcid": "$NVMF_PORT", 00:16:34.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:34.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:34.308 "hdgst": ${hdgst:-false}, 00:16:34.308 "ddgst": ${ddgst:-false} 00:16:34.308 }, 00:16:34.308 "method": "bdev_nvme_attach_controller" 00:16:34.308 } 00:16:34.308 EOF 00:16:34.308 )") 00:16:34.308 03:04:29 -- nvmf/common.sh@542 -- # cat 00:16:34.308 03:04:29 -- nvmf/common.sh@544 -- # jq . 
00:16:34.308 03:04:29 -- nvmf/common.sh@545 -- # IFS=, 00:16:34.308 03:04:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:34.308 "params": { 00:16:34.308 "name": "Nvme0", 00:16:34.308 "trtype": "tcp", 00:16:34.308 "traddr": "10.0.0.2", 00:16:34.308 "adrfam": "ipv4", 00:16:34.308 "trsvcid": "4420", 00:16:34.308 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:34.308 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:34.308 "hdgst": false, 00:16:34.308 "ddgst": false 00:16:34.308 }, 00:16:34.308 "method": "bdev_nvme_attach_controller" 00:16:34.308 }' 00:16:34.308 [2024-07-14 03:04:29.543754] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:34.308 [2024-07-14 03:04:29.543846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1988153 ] 00:16:34.567 EAL: No free 2048 kB hugepages reported on node 1 00:16:34.567 [2024-07-14 03:04:29.605870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.567 [2024-07-14 03:04:29.692131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.828 Running I/O for 1 seconds... 
00:16:36.205 00:16:36.205 Latency(us) 00:16:36.205 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:36.205 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:36.205 Verification LBA range: start 0x0 length 0x400 00:16:36.205 Nvme0n1 : 1.01 2959.32 184.96 0.00 0.00 21326.66 1978.22 27379.48 00:16:36.205 =================================================================================================================== 00:16:36.206 Total : 2959.32 184.96 0.00 0.00 21326.66 1978.22 27379.48 00:16:36.206 03:04:31 -- target/host_management.sh@101 -- # stoptarget 00:16:36.206 03:04:31 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:36.206 03:04:31 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:36.206 03:04:31 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:36.206 03:04:31 -- target/host_management.sh@40 -- # nvmftestfini 00:16:36.206 03:04:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:36.206 03:04:31 -- nvmf/common.sh@116 -- # sync 00:16:36.206 03:04:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:36.206 03:04:31 -- nvmf/common.sh@119 -- # set +e 00:16:36.206 03:04:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:36.206 03:04:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:36.206 rmmod nvme_tcp 00:16:36.206 rmmod nvme_fabrics 00:16:36.206 rmmod nvme_keyring 00:16:36.206 03:04:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:36.206 03:04:31 -- nvmf/common.sh@123 -- # set -e 00:16:36.206 03:04:31 -- nvmf/common.sh@124 -- # return 0 00:16:36.206 03:04:31 -- nvmf/common.sh@477 -- # '[' -n 1987689 ']' 00:16:36.206 03:04:31 -- nvmf/common.sh@478 -- # killprocess 1987689 00:16:36.206 03:04:31 -- common/autotest_common.sh@926 -- # '[' -z 1987689 ']' 00:16:36.206 03:04:31 -- 
common/autotest_common.sh@930 -- # kill -0 1987689 00:16:36.206 03:04:31 -- common/autotest_common.sh@931 -- # uname 00:16:36.206 03:04:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:36.206 03:04:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1987689 00:16:36.206 03:04:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:36.206 03:04:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:36.206 03:04:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1987689' 00:16:36.206 killing process with pid 1987689 00:16:36.206 03:04:31 -- common/autotest_common.sh@945 -- # kill 1987689 00:16:36.206 03:04:31 -- common/autotest_common.sh@950 -- # wait 1987689 00:16:36.465 [2024-07-14 03:04:31.582250] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:36.465 03:04:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:36.465 03:04:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:36.465 03:04:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:36.465 03:04:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:36.465 03:04:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:36.465 03:04:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:36.465 03:04:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:36.465 03:04:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.020 03:04:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:39.020 00:16:39.020 real 0m7.337s 00:16:39.020 user 0m21.725s 00:16:39.020 sys 0m1.665s 00:16:39.020 03:04:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.020 03:04:33 -- common/autotest_common.sh@10 -- # set +x 00:16:39.020 ************************************ 00:16:39.020 END TEST nvmf_host_management 00:16:39.020 ************************************ 00:16:39.020 03:04:33 -- 
target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:16:39.020 00:16:39.020 real 0m9.515s 00:16:39.020 user 0m22.473s 00:16:39.020 sys 0m3.121s 00:16:39.020 03:04:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:39.020 03:04:33 -- common/autotest_common.sh@10 -- # set +x 00:16:39.020 ************************************ 00:16:39.020 END TEST nvmf_host_management 00:16:39.020 ************************************ 00:16:39.020 03:04:33 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:39.020 03:04:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:39.020 03:04:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:39.020 03:04:33 -- common/autotest_common.sh@10 -- # set +x 00:16:39.020 ************************************ 00:16:39.020 START TEST nvmf_lvol 00:16:39.020 ************************************ 00:16:39.020 03:04:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:39.020 * Looking for test storage... 
00:16:39.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:39.020 03:04:33 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:39.020 03:04:33 -- nvmf/common.sh@7 -- # uname -s 00:16:39.020 03:04:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:39.020 03:04:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:39.020 03:04:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:39.020 03:04:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:39.020 03:04:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:39.020 03:04:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:39.020 03:04:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:39.020 03:04:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:39.020 03:04:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:39.020 03:04:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:39.020 03:04:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.020 03:04:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:39.020 03:04:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:39.020 03:04:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:39.020 03:04:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:39.020 03:04:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:39.020 03:04:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:39.020 03:04:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:39.020 03:04:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:39.020 03:04:33 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.020 03:04:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.020 03:04:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.020 03:04:33 -- paths/export.sh@5 -- # export PATH 00:16:39.020 03:04:33 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:39.020 03:04:33 -- nvmf/common.sh@46 -- # : 0 00:16:39.020 03:04:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:39.020 03:04:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:39.020 03:04:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:39.020 03:04:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:39.020 03:04:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:39.020 03:04:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:39.020 03:04:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:39.020 03:04:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:39.020 03:04:33 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:39.020 03:04:33 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:39.020 03:04:33 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:39.020 03:04:33 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:39.020 03:04:33 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:39.020 03:04:33 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:39.020 03:04:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:39.020 03:04:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:39.020 03:04:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:39.020 03:04:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:39.020 03:04:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 
00:16:39.020 03:04:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.020 03:04:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.020 03:04:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:39.020 03:04:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:39.020 03:04:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:39.020 03:04:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:39.020 03:04:33 -- common/autotest_common.sh@10 -- # set +x 00:16:40.391 03:04:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:40.391 03:04:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:40.391 03:04:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:40.391 03:04:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:40.391 03:04:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:40.650 03:04:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:40.650 03:04:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:40.650 03:04:35 -- nvmf/common.sh@294 -- # net_devs=() 00:16:40.650 03:04:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:40.650 03:04:35 -- nvmf/common.sh@295 -- # e810=() 00:16:40.650 03:04:35 -- nvmf/common.sh@295 -- # local -ga e810 00:16:40.650 03:04:35 -- nvmf/common.sh@296 -- # x722=() 00:16:40.650 03:04:35 -- nvmf/common.sh@296 -- # local -ga x722 00:16:40.650 03:04:35 -- nvmf/common.sh@297 -- # mlx=() 00:16:40.650 03:04:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:40.650 03:04:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.650 03:04:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.650 03:04:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.650 03:04:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.650 03:04:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.650 03:04:35 -- 
nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.650 03:04:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.650 03:04:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.650 03:04:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.650 03:04:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.650 03:04:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.650 03:04:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:40.650 03:04:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:40.650 03:04:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:40.650 03:04:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:40.650 03:04:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:40.650 03:04:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:40.650 03:04:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:40.650 03:04:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:40.650 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:40.650 03:04:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:40.650 03:04:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:40.650 03:04:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.650 03:04:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.650 03:04:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:40.650 03:04:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:40.650 03:04:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:40.650 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:40.650 03:04:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:40.650 03:04:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:40.650 03:04:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.650 03:04:35 -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.650 03:04:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:40.650 03:04:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:40.650 03:04:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:40.650 03:04:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:40.650 03:04:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:40.650 03:04:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.650 03:04:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:40.650 03:04:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.650 03:04:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:40.650 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:40.650 03:04:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.650 03:04:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:40.650 03:04:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.650 03:04:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:40.650 03:04:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.650 03:04:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:40.650 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:40.650 03:04:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.650 03:04:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:40.650 03:04:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:40.650 03:04:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:40.650 03:04:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:40.650 03:04:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:40.650 03:04:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.650 03:04:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.650 03:04:35 -- nvmf/common.sh@230 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.650 03:04:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:40.650 03:04:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.650 03:04:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.650 03:04:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:40.650 03:04:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.650 03:04:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.650 03:04:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:40.650 03:04:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:40.650 03:04:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.650 03:04:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.650 03:04:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.650 03:04:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.650 03:04:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:40.650 03:04:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:40.650 03:04:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.650 03:04:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:40.650 03:04:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:40.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:40.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:16:40.650 00:16:40.650 --- 10.0.0.2 ping statistics --- 00:16:40.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.650 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:16:40.650 03:04:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:40.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:40.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:16:40.650 00:16:40.650 --- 10.0.0.1 ping statistics --- 00:16:40.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.650 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:16:40.650 03:04:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.650 03:04:35 -- nvmf/common.sh@410 -- # return 0 00:16:40.650 03:04:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:40.650 03:04:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.650 03:04:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:40.650 03:04:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:40.650 03:04:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.650 03:04:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:40.650 03:04:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:40.650 03:04:35 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:40.650 03:04:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:40.650 03:04:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:40.650 03:04:35 -- common/autotest_common.sh@10 -- # set +x 00:16:40.650 03:04:35 -- nvmf/common.sh@469 -- # nvmfpid=1990268 00:16:40.650 03:04:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:40.650 03:04:35 -- nvmf/common.sh@470 -- # waitforlisten 1990268 00:16:40.650 03:04:35 -- common/autotest_common.sh@819 -- # '[' -z 1990268 ']' 00:16:40.650 03:04:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.650 03:04:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:40.650 03:04:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:40.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.650 03:04:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:40.650 03:04:35 -- common/autotest_common.sh@10 -- # set +x 00:16:40.650 [2024-07-14 03:04:35.877096] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:40.650 [2024-07-14 03:04:35.877191] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.908 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.908 [2024-07-14 03:04:35.943878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:40.908 [2024-07-14 03:04:36.031213] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:40.908 [2024-07-14 03:04:36.031365] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.908 [2024-07-14 03:04:36.031383] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.908 [2024-07-14 03:04:36.031395] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:40.908 [2024-07-14 03:04:36.031457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.908 [2024-07-14 03:04:36.031484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.908 [2024-07-14 03:04:36.031487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.839 03:04:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:41.839 03:04:36 -- common/autotest_common.sh@852 -- # return 0 00:16:41.839 03:04:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:41.839 03:04:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:41.839 03:04:36 -- common/autotest_common.sh@10 -- # set +x 00:16:41.840 03:04:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.840 03:04:36 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:41.840 [2024-07-14 03:04:37.047413] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.840 03:04:37 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:42.097 03:04:37 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:42.097 03:04:37 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:42.355 03:04:37 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:42.355 03:04:37 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:42.613 03:04:37 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:42.871 03:04:38 -- target/nvmf_lvol.sh@29 -- # lvs=d2e8cfa9-87bd-48c7-8ddc-08725e3f8a9a 00:16:42.871 03:04:38 -- target/nvmf_lvol.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d2e8cfa9-87bd-48c7-8ddc-08725e3f8a9a lvol 20 00:16:43.128 03:04:38 -- target/nvmf_lvol.sh@32 -- # lvol=5ad212b7-b07f-4e84-9c9b-a171e8788a09 00:16:43.128 03:04:38 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:43.385 03:04:38 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5ad212b7-b07f-4e84-9c9b-a171e8788a09 00:16:43.643 03:04:38 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:43.901 [2024-07-14 03:04:39.032074] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.901 03:04:39 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:44.158 03:04:39 -- target/nvmf_lvol.sh@42 -- # perf_pid=1990707 00:16:44.158 03:04:39 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:44.158 03:04:39 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:44.158 EAL: No free 2048 kB hugepages reported on node 1 00:16:45.093 03:04:40 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5ad212b7-b07f-4e84-9c9b-a171e8788a09 MY_SNAPSHOT 00:16:45.366 03:04:40 -- target/nvmf_lvol.sh@47 -- # snapshot=e37d300b-d2bc-4aca-be0f-be6d74b2d8dc 00:16:45.366 03:04:40 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 
5ad212b7-b07f-4e84-9c9b-a171e8788a09 30 00:16:45.672 03:04:40 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e37d300b-d2bc-4aca-be0f-be6d74b2d8dc MY_CLONE 00:16:45.930 03:04:41 -- target/nvmf_lvol.sh@49 -- # clone=7aba4c48-3667-4dbe-ba63-0c31512a23db 00:16:45.930 03:04:41 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7aba4c48-3667-4dbe-ba63-0c31512a23db 00:16:46.497 03:04:41 -- target/nvmf_lvol.sh@53 -- # wait 1990707 00:16:54.617 Initializing NVMe Controllers 00:16:54.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:54.617 Controller IO queue size 128, less than required. 00:16:54.617 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:54.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:54.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:54.617 Initialization complete. Launching workers. 
00:16:54.617 ======================================================== 00:16:54.617 Latency(us) 00:16:54.617 Device Information : IOPS MiB/s Average min max 00:16:54.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11132.50 43.49 11503.75 604.62 96896.73 00:16:54.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11085.70 43.30 11553.25 1967.73 64583.42 00:16:54.617 ======================================================== 00:16:54.617 Total : 22218.20 86.79 11528.45 604.62 96896.73 00:16:54.617 00:16:54.617 03:04:49 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:54.875 03:04:49 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5ad212b7-b07f-4e84-9c9b-a171e8788a09 00:16:55.135 03:04:50 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d2e8cfa9-87bd-48c7-8ddc-08725e3f8a9a 00:16:55.395 03:04:50 -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:55.395 03:04:50 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:55.395 03:04:50 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:55.395 03:04:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:55.395 03:04:50 -- nvmf/common.sh@116 -- # sync 00:16:55.395 03:04:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:55.395 03:04:50 -- nvmf/common.sh@119 -- # set +e 00:16:55.395 03:04:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:55.395 03:04:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:55.395 rmmod nvme_tcp 00:16:55.395 rmmod nvme_fabrics 00:16:55.395 rmmod nvme_keyring 00:16:55.395 03:04:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:55.395 03:04:50 -- nvmf/common.sh@123 -- # set -e 00:16:55.395 03:04:50 -- nvmf/common.sh@124 -- # return 0 00:16:55.395 03:04:50 -- nvmf/common.sh@477 -- # '[' -n 
1990268 ']' 00:16:55.395 03:04:50 -- nvmf/common.sh@478 -- # killprocess 1990268 00:16:55.395 03:04:50 -- common/autotest_common.sh@926 -- # '[' -z 1990268 ']' 00:16:55.395 03:04:50 -- common/autotest_common.sh@930 -- # kill -0 1990268 00:16:55.395 03:04:50 -- common/autotest_common.sh@931 -- # uname 00:16:55.395 03:04:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:55.395 03:04:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1990268 00:16:55.395 03:04:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:55.395 03:04:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:55.395 03:04:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1990268' 00:16:55.395 killing process with pid 1990268 00:16:55.396 03:04:50 -- common/autotest_common.sh@945 -- # kill 1990268 00:16:55.396 03:04:50 -- common/autotest_common.sh@950 -- # wait 1990268 00:16:55.655 03:04:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:55.655 03:04:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:55.655 03:04:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:55.655 03:04:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:55.655 03:04:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:55.655 03:04:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.655 03:04:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:55.655 03:04:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.559 03:04:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:57.559 00:16:57.559 real 0m19.101s 00:16:57.559 user 1m5.521s 00:16:57.559 sys 0m5.407s 00:16:57.559 03:04:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:57.559 03:04:52 -- common/autotest_common.sh@10 -- # set +x 00:16:57.559 ************************************ 00:16:57.559 END TEST nvmf_lvol 00:16:57.559 ************************************ 
00:16:57.817 03:04:52 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:57.817 03:04:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:57.817 03:04:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:57.817 03:04:52 -- common/autotest_common.sh@10 -- # set +x 00:16:57.817 ************************************ 00:16:57.817 START TEST nvmf_lvs_grow 00:16:57.817 ************************************ 00:16:57.817 03:04:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:57.817 * Looking for test storage... 00:16:57.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:57.817 03:04:52 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.817 03:04:52 -- nvmf/common.sh@7 -- # uname -s 00:16:57.817 03:04:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.817 03:04:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.817 03:04:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.817 03:04:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.817 03:04:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.817 03:04:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.817 03:04:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.817 03:04:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.817 03:04:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.817 03:04:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.817 03:04:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.817 03:04:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.817 03:04:52 -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.817 03:04:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.817 03:04:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.818 03:04:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.818 03:04:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:57.818 03:04:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.818 03:04:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.818 03:04:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.818 03:04:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.818 03:04:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.818 03:04:52 -- paths/export.sh@5 -- # export PATH 00:16:57.818 03:04:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.818 03:04:52 -- nvmf/common.sh@46 -- # : 0 00:16:57.818 03:04:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:57.818 03:04:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:57.818 03:04:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:57.818 03:04:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.818 03:04:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.818 03:04:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:57.818 03:04:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:57.818 03:04:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:57.818 03:04:52 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:57.818 03:04:52 -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:57.818 03:04:52 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:16:57.818 03:04:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:57.818 03:04:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.818 03:04:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:57.818 03:04:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:57.818 03:04:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:57.818 03:04:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.818 03:04:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.818 03:04:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.818 03:04:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:57.818 03:04:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:57.818 03:04:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:57.818 03:04:52 -- common/autotest_common.sh@10 -- # set +x 00:16:59.724 03:04:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:59.724 03:04:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:59.724 03:04:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:59.724 03:04:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:59.724 03:04:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:59.724 03:04:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:59.724 03:04:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:59.724 03:04:54 -- nvmf/common.sh@294 -- # net_devs=() 00:16:59.724 03:04:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:59.724 03:04:54 -- nvmf/common.sh@295 -- # e810=() 00:16:59.724 03:04:54 -- nvmf/common.sh@295 -- # local -ga e810 00:16:59.724 03:04:54 -- nvmf/common.sh@296 -- # x722=() 00:16:59.724 03:04:54 -- nvmf/common.sh@296 -- # local -ga x722 00:16:59.724 03:04:54 -- nvmf/common.sh@297 -- # mlx=() 00:16:59.724 03:04:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:59.724 03:04:54 -- 
nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:59.724 03:04:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:59.724 03:04:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:59.724 03:04:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:59.724 03:04:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:59.724 03:04:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:59.724 03:04:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:59.724 03:04:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:59.724 03:04:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:59.724 03:04:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:59.724 03:04:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:59.724 03:04:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:59.724 03:04:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:59.724 03:04:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:59.724 03:04:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:59.724 03:04:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:59.724 03:04:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:59.724 03:04:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:59.724 03:04:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:59.725 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:59.725 03:04:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:59.725 03:04:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:59.725 03:04:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.725 03:04:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.725 03:04:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:59.725 
03:04:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:59.725 03:04:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:59.725 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:59.725 03:04:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:59.725 03:04:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:59.725 03:04:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.725 03:04:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.725 03:04:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:59.725 03:04:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:59.725 03:04:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:59.725 03:04:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:59.725 03:04:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:59.725 03:04:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.725 03:04:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:59.725 03:04:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.725 03:04:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:59.725 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:59.725 03:04:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.725 03:04:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:59.725 03:04:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.725 03:04:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:59.725 03:04:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.725 03:04:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:59.725 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:59.725 03:04:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.725 03:04:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:59.725 03:04:54 -- 
nvmf/common.sh@402 -- # is_hw=yes 00:16:59.725 03:04:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:59.725 03:04:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:59.725 03:04:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:59.725 03:04:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:59.725 03:04:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:59.725 03:04:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:59.725 03:04:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:59.725 03:04:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:59.725 03:04:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:59.725 03:04:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:59.725 03:04:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:59.725 03:04:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:59.725 03:04:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:59.725 03:04:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:59.725 03:04:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:59.725 03:04:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:59.725 03:04:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:59.725 03:04:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:59.725 03:04:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:59.725 03:04:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:59.725 03:04:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:59.725 03:04:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:59.725 03:04:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:59.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:59.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:16:59.725 00:16:59.725 --- 10.0.0.2 ping statistics --- 00:16:59.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.725 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:16:59.725 03:04:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:59.725 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:59.725 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:16:59.725 00:16:59.725 --- 10.0.0.1 ping statistics --- 00:16:59.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:59.725 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:16:59.725 03:04:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:59.725 03:04:54 -- nvmf/common.sh@410 -- # return 0 00:16:59.725 03:04:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:59.725 03:04:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:59.725 03:04:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:59.725 03:04:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:59.725 03:04:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:59.725 03:04:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:59.725 03:04:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:59.725 03:04:54 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:16:59.725 03:04:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:59.725 03:04:54 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:59.725 03:04:54 -- common/autotest_common.sh@10 -- # set +x 00:16:59.725 03:04:54 -- nvmf/common.sh@469 -- # nvmfpid=1994011 00:16:59.725 03:04:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:59.725 03:04:54 -- nvmf/common.sh@470 -- # waitforlisten 1994011 00:16:59.725 03:04:54 -- 
common/autotest_common.sh@819 -- # '[' -z 1994011 ']' 00:16:59.725 03:04:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.725 03:04:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:59.725 03:04:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.725 03:04:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:59.725 03:04:54 -- common/autotest_common.sh@10 -- # set +x 00:16:59.984 [2024-07-14 03:04:55.008957] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:59.984 [2024-07-14 03:04:55.009027] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.984 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.984 [2024-07-14 03:04:55.078593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.984 [2024-07-14 03:04:55.170296] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:59.984 [2024-07-14 03:04:55.170443] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:59.984 [2024-07-14 03:04:55.170461] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:59.984 [2024-07-14 03:04:55.170474] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:59.984 [2024-07-14 03:04:55.170502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.920 03:04:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:00.920 03:04:55 -- common/autotest_common.sh@852 -- # return 0 00:17:00.920 03:04:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:00.920 03:04:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:00.920 03:04:55 -- common/autotest_common.sh@10 -- # set +x 00:17:00.920 03:04:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.920 03:04:56 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:01.180 [2024-07-14 03:04:56.225857] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:01.180 03:04:56 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:01.180 03:04:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:01.180 03:04:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:01.180 03:04:56 -- common/autotest_common.sh@10 -- # set +x 00:17:01.180 ************************************ 00:17:01.180 START TEST lvs_grow_clean 00:17:01.180 ************************************ 00:17:01.180 03:04:56 -- common/autotest_common.sh@1104 -- # lvs_grow 00:17:01.180 03:04:56 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:01.180 03:04:56 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:01.180 03:04:56 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:01.180 03:04:56 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:01.180 03:04:56 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:01.180 03:04:56 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:01.180 03:04:56 -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:01.180 03:04:56 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:01.180 03:04:56 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:01.440 03:04:56 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:01.440 03:04:56 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:01.699 03:04:56 -- target/nvmf_lvs_grow.sh@28 -- # lvs=9c36b2a7-9437-4440-ba5e-0f680ec5ea4b 00:17:01.699 03:04:56 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c36b2a7-9437-4440-ba5e-0f680ec5ea4b 00:17:01.699 03:04:56 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:01.957 03:04:57 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:01.957 03:04:57 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:01.957 03:04:57 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9c36b2a7-9437-4440-ba5e-0f680ec5ea4b lvol 150 00:17:02.215 03:04:57 -- target/nvmf_lvs_grow.sh@33 -- # lvol=95b416ac-9390-45e0-9868-66efcba61472 00:17:02.215 03:04:57 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:02.215 03:04:57 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:02.215 [2024-07-14 03:04:57.462962] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:02.215 [2024-07-14 03:04:57.463051] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:02.215 true 00:17:02.475 03:04:57 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c36b2a7-9437-4440-ba5e-0f680ec5ea4b 00:17:02.475 03:04:57 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:02.475 03:04:57 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:02.475 03:04:57 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:02.734 03:04:57 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 95b416ac-9390-45e0-9868-66efcba61472 00:17:02.995 03:04:58 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:03.255 [2024-07-14 03:04:58.417941] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:03.255 03:04:58 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:03.514 03:04:58 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1994466 00:17:03.514 03:04:58 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:03.514 03:04:58 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:03.514 03:04:58 -- 
target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1994466 /var/tmp/bdevperf.sock 00:17:03.514 03:04:58 -- common/autotest_common.sh@819 -- # '[' -z 1994466 ']' 00:17:03.514 03:04:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:03.514 03:04:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:03.514 03:04:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:03.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:03.514 03:04:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:03.514 03:04:58 -- common/autotest_common.sh@10 -- # set +x 00:17:03.514 [2024-07-14 03:04:58.701565] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:03.514 [2024-07-14 03:04:58.701635] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1994466 ] 00:17:03.514 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.514 [2024-07-14 03:04:58.765962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.772 [2024-07-14 03:04:58.863103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.707 03:04:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:04.707 03:04:59 -- common/autotest_common.sh@852 -- # return 0 00:17:04.707 03:04:59 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:04.967 Nvme0n1 00:17:04.967 03:05:00 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 
3000 00:17:05.268 [ 00:17:05.268 { 00:17:05.268 "name": "Nvme0n1", 00:17:05.268 "aliases": [ 00:17:05.268 "95b416ac-9390-45e0-9868-66efcba61472" 00:17:05.268 ], 00:17:05.268 "product_name": "NVMe disk", 00:17:05.268 "block_size": 4096, 00:17:05.268 "num_blocks": 38912, 00:17:05.268 "uuid": "95b416ac-9390-45e0-9868-66efcba61472", 00:17:05.268 "assigned_rate_limits": { 00:17:05.268 "rw_ios_per_sec": 0, 00:17:05.268 "rw_mbytes_per_sec": 0, 00:17:05.268 "r_mbytes_per_sec": 0, 00:17:05.268 "w_mbytes_per_sec": 0 00:17:05.268 }, 00:17:05.268 "claimed": false, 00:17:05.268 "zoned": false, 00:17:05.268 "supported_io_types": { 00:17:05.268 "read": true, 00:17:05.268 "write": true, 00:17:05.268 "unmap": true, 00:17:05.268 "write_zeroes": true, 00:17:05.268 "flush": true, 00:17:05.268 "reset": true, 00:17:05.268 "compare": true, 00:17:05.268 "compare_and_write": true, 00:17:05.268 "abort": true, 00:17:05.268 "nvme_admin": true, 00:17:05.268 "nvme_io": true 00:17:05.268 }, 00:17:05.268 "driver_specific": { 00:17:05.268 "nvme": [ 00:17:05.268 { 00:17:05.268 "trid": { 00:17:05.268 "trtype": "TCP", 00:17:05.268 "adrfam": "IPv4", 00:17:05.268 "traddr": "10.0.0.2", 00:17:05.268 "trsvcid": "4420", 00:17:05.268 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:05.268 }, 00:17:05.268 "ctrlr_data": { 00:17:05.268 "cntlid": 1, 00:17:05.268 "vendor_id": "0x8086", 00:17:05.268 "model_number": "SPDK bdev Controller", 00:17:05.268 "serial_number": "SPDK0", 00:17:05.268 "firmware_revision": "24.01.1", 00:17:05.268 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:05.268 "oacs": { 00:17:05.268 "security": 0, 00:17:05.268 "format": 0, 00:17:05.268 "firmware": 0, 00:17:05.268 "ns_manage": 0 00:17:05.268 }, 00:17:05.268 "multi_ctrlr": true, 00:17:05.269 "ana_reporting": false 00:17:05.269 }, 00:17:05.269 "vs": { 00:17:05.269 "nvme_version": "1.3" 00:17:05.269 }, 00:17:05.269 "ns_data": { 00:17:05.269 "id": 1, 00:17:05.269 "can_share": true 00:17:05.269 } 00:17:05.269 } 00:17:05.269 ], 00:17:05.269 
"mp_policy": "active_passive" 00:17:05.269 } 00:17:05.269 } 00:17:05.269 ] 00:17:05.269 03:05:00 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1994740 00:17:05.269 03:05:00 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:05.269 03:05:00 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:05.528 Running I/O for 10 seconds... 00:17:06.464 Latency(us) 00:17:06.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.464 Nvme0n1 : 1.00 13611.00 53.17 0.00 0.00 0.00 0.00 0.00 00:17:06.464 =================================================================================================================== 00:17:06.464 Total : 13611.00 53.17 0.00 0.00 0.00 0.00 0.00 00:17:06.464 00:17:07.400 03:05:02 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9c36b2a7-9437-4440-ba5e-0f680ec5ea4b 00:17:07.400 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.400 Nvme0n1 : 2.00 13733.50 53.65 0.00 0.00 0.00 0.00 0.00 00:17:07.400 =================================================================================================================== 00:17:07.400 Total : 13733.50 53.65 0.00 0.00 0.00 0.00 0.00 00:17:07.400 00:17:07.658 true 00:17:07.658 03:05:02 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c36b2a7-9437-4440-ba5e-0f680ec5ea4b 00:17:07.658 03:05:02 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:07.916 03:05:02 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:07.916 03:05:02 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:07.916 03:05:02 -- target/nvmf_lvs_grow.sh@65 -- # wait 1994740 00:17:08.491 Job: 
Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:08.491 Nvme0n1 : 3.00 13822.33 53.99 0.00 0.00 0.00 0.00 0.00 00:17:08.491 =================================================================================================================== 00:17:08.491 Total : 13822.33 53.99 0.00 0.00 0.00 0.00 0.00 00:17:08.491 00:17:09.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:09.428 Nvme0n1 : 4.00 13876.75 54.21 0.00 0.00 0.00 0.00 0.00 00:17:09.428 =================================================================================================================== 00:17:09.428 Total : 13876.75 54.21 0.00 0.00 0.00 0.00 0.00 00:17:09.428 00:17:10.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:10.361 Nvme0n1 : 5.00 13927.00 54.40 0.00 0.00 0.00 0.00 0.00 00:17:10.361 =================================================================================================================== 00:17:10.361 Total : 13927.00 54.40 0.00 0.00 0.00 0.00 0.00 00:17:10.361 00:17:11.298 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:11.298 Nvme0n1 : 6.00 13985.83 54.63 0.00 0.00 0.00 0.00 0.00 00:17:11.298 =================================================================================================================== 00:17:11.298 Total : 13985.83 54.63 0.00 0.00 0.00 0.00 0.00 00:17:11.298 00:17:12.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:12.675 Nvme0n1 : 7.00 14010.71 54.73 0.00 0.00 0.00 0.00 0.00 00:17:12.675 =================================================================================================================== 00:17:12.675 Total : 14010.71 54.73 0.00 0.00 0.00 0.00 0.00 00:17:12.675 00:17:13.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.613 Nvme0n1 : 8.00 14034.38 54.82 0.00 0.00 0.00 0.00 0.00 00:17:13.613 
=================================================================================================================== 00:17:13.613 Total : 14034.38 54.82 0.00 0.00 0.00 0.00 0.00 00:17:13.613 00:17:14.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.551 Nvme0n1 : 9.00 14058.11 54.91 0.00 0.00 0.00 0.00 0.00 00:17:14.551 =================================================================================================================== 00:17:14.551 Total : 14058.11 54.91 0.00 0.00 0.00 0.00 0.00 00:17:14.551 00:17:15.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.489 Nvme0n1 : 10.00 14074.70 54.98 0.00 0.00 0.00 0.00 0.00 00:17:15.489 =================================================================================================================== 00:17:15.489 Total : 14074.70 54.98 0.00 0.00 0.00 0.00 0.00 00:17:15.489 00:17:15.489 00:17:15.489 Latency(us) 00:17:15.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.489 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.489 Nvme0n1 : 10.01 14074.74 54.98 0.00 0.00 9086.57 2281.62 11796.48 00:17:15.490 =================================================================================================================== 00:17:15.490 Total : 14074.74 54.98 0.00 0.00 9086.57 2281.62 11796.48 00:17:15.490 0 00:17:15.490 03:05:10 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1994466 00:17:15.490 03:05:10 -- common/autotest_common.sh@926 -- # '[' -z 1994466 ']' 00:17:15.490 03:05:10 -- common/autotest_common.sh@930 -- # kill -0 1994466 00:17:15.490 03:05:10 -- common/autotest_common.sh@931 -- # uname 00:17:15.490 03:05:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:15.490 03:05:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1994466 00:17:15.490 03:05:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:15.490 03:05:10 -- 
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:15.490 03:05:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1994466' 00:17:15.490 killing process with pid 1994466 00:17:15.490 03:05:10 -- common/autotest_common.sh@945 -- # kill 1994466 00:17:15.490 Received shutdown signal, test time was about 10.000000 seconds 00:17:15.490 00:17:15.490 Latency(us) 00:17:15.490 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:15.490 =================================================================================================================== 00:17:15.490 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:15.490 03:05:10 -- common/autotest_common.sh@950 -- # wait 1994466 00:17:15.748 03:05:10 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:16.006 03:05:11 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c36b2a7-9437-4440-ba5e-0f680ec5ea4b 00:17:16.006 03:05:11 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:16.263 03:05:11 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:16.263 03:05:11 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:16.263 03:05:11 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:16.521 [2024-07-14 03:05:11.542566] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:16.521 03:05:11 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c36b2a7-9437-4440-ba5e-0f680ec5ea4b 00:17:16.521 03:05:11 -- common/autotest_common.sh@640 -- # local es=0 00:17:16.521 03:05:11 -- common/autotest_common.sh@642 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c36b2a7-9437-4440-ba5e-0f680ec5ea4b 00:17:16.521 03:05:11 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:16.521 03:05:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:16.521 03:05:11 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:16.521 03:05:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:16.521 03:05:11 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:16.521 03:05:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:16.521 03:05:11 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:16.521 03:05:11 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:16.521 03:05:11 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c36b2a7-9437-4440-ba5e-0f680ec5ea4b 00:17:16.781 request: 00:17:16.781 { 00:17:16.781 "uuid": "9c36b2a7-9437-4440-ba5e-0f680ec5ea4b", 00:17:16.781 "method": "bdev_lvol_get_lvstores", 00:17:16.781 "req_id": 1 00:17:16.781 } 00:17:16.781 Got JSON-RPC error response 00:17:16.781 response: 00:17:16.781 { 00:17:16.781 "code": -19, 00:17:16.781 "message": "No such device" 00:17:16.781 } 00:17:16.781 03:05:11 -- common/autotest_common.sh@643 -- # es=1 00:17:16.781 03:05:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:16.781 03:05:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:16.781 03:05:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:16.781 03:05:11 -- target/nvmf_lvs_grow.sh@85 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:17.039 aio_bdev 00:17:17.039 03:05:12 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 95b416ac-9390-45e0-9868-66efcba61472 00:17:17.039 03:05:12 -- common/autotest_common.sh@887 -- # local bdev_name=95b416ac-9390-45e0-9868-66efcba61472 00:17:17.039 03:05:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:17.039 03:05:12 -- common/autotest_common.sh@889 -- # local i 00:17:17.039 03:05:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:17.039 03:05:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:17.039 03:05:12 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:17.039 03:05:12 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 95b416ac-9390-45e0-9868-66efcba61472 -t 2000 00:17:17.297 [ 00:17:17.297 { 00:17:17.297 "name": "95b416ac-9390-45e0-9868-66efcba61472", 00:17:17.297 "aliases": [ 00:17:17.297 "lvs/lvol" 00:17:17.297 ], 00:17:17.297 "product_name": "Logical Volume", 00:17:17.297 "block_size": 4096, 00:17:17.297 "num_blocks": 38912, 00:17:17.297 "uuid": "95b416ac-9390-45e0-9868-66efcba61472", 00:17:17.297 "assigned_rate_limits": { 00:17:17.297 "rw_ios_per_sec": 0, 00:17:17.297 "rw_mbytes_per_sec": 0, 00:17:17.297 "r_mbytes_per_sec": 0, 00:17:17.297 "w_mbytes_per_sec": 0 00:17:17.297 }, 00:17:17.297 "claimed": false, 00:17:17.297 "zoned": false, 00:17:17.297 "supported_io_types": { 00:17:17.297 "read": true, 00:17:17.297 "write": true, 00:17:17.297 "unmap": true, 00:17:17.297 "write_zeroes": true, 00:17:17.297 "flush": false, 00:17:17.297 "reset": true, 00:17:17.297 "compare": false, 00:17:17.297 "compare_and_write": false, 00:17:17.297 "abort": false, 00:17:17.297 "nvme_admin": false, 00:17:17.297 
"nvme_io": false 00:17:17.297 }, 00:17:17.297 "driver_specific": { 00:17:17.297 "lvol": { 00:17:17.297 "lvol_store_uuid": "9c36b2a7-9437-4440-ba5e-0f680ec5ea4b", 00:17:17.297 "base_bdev": "aio_bdev", 00:17:17.297 "thin_provision": false, 00:17:17.297 "snapshot": false, 00:17:17.297 "clone": false, 00:17:17.297 "esnap_clone": false 00:17:17.297 } 00:17:17.297 } 00:17:17.297 } 00:17:17.297 ] 00:17:17.297 03:05:12 -- common/autotest_common.sh@895 -- # return 0 00:17:17.297 03:05:12 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c36b2a7-9437-4440-ba5e-0f680ec5ea4b 00:17:17.297 03:05:12 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:17.555 03:05:12 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:17.555 03:05:12 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9c36b2a7-9437-4440-ba5e-0f680ec5ea4b 00:17:17.555 03:05:12 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:17.814 03:05:12 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:17.814 03:05:12 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 95b416ac-9390-45e0-9868-66efcba61472 00:17:18.073 03:05:13 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9c36b2a7-9437-4440-ba5e-0f680ec5ea4b 00:17:18.332 03:05:13 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:18.590 03:05:13 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:18.590 00:17:18.590 real 0m17.490s 00:17:18.590 user 0m17.113s 00:17:18.590 sys 0m1.892s 00:17:18.590 03:05:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:17:18.590 03:05:13 -- common/autotest_common.sh@10 -- # set +x 00:17:18.590 ************************************ 00:17:18.590 END TEST lvs_grow_clean 00:17:18.590 ************************************ 00:17:18.590 03:05:13 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:18.590 03:05:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:18.590 03:05:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:18.590 03:05:13 -- common/autotest_common.sh@10 -- # set +x 00:17:18.590 ************************************ 00:17:18.590 START TEST lvs_grow_dirty 00:17:18.590 ************************************ 00:17:18.590 03:05:13 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:17:18.590 03:05:13 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:18.590 03:05:13 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:18.590 03:05:13 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:18.590 03:05:13 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:18.590 03:05:13 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:18.590 03:05:13 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:18.590 03:05:13 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:18.590 03:05:13 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:18.590 03:05:13 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:18.847 03:05:14 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:18.847 03:05:14 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 
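Aside: the cluster counts reported in the lvs_grow_clean run above (data_clusters=49 for the 200M AIO file, 99 after it is grown to 400M, and "num_blocks": 38912 for the 150M lvol) follow from the 4194304-byte cluster size passed to bdev_lvol_create_lvstore. A minimal sketch of that arithmetic, assuming the lvstore metadata consumes one cluster and that lvol sizes round up to whole clusters (both inferred from the log output, not from the SPDK source):

```python
def lvs_data_clusters(file_mib, cluster_mib=4, md_clusters=1):
    # Usable data clusters: total clusters in the backing file
    # minus the (assumed) metadata overhead.
    return file_mib // cluster_mib - md_clusters

def lvol_num_blocks(lvol_mib, cluster_mib=4, block_size=4096):
    # An lvol is allocated in whole clusters, so round its size up.
    clusters = -(-lvol_mib // cluster_mib)  # ceiling division
    return clusters * cluster_mib * 1024 * 1024 // block_size

print(lvs_data_clusters(200))  # matches data_clusters=49 before the grow
print(lvs_data_clusters(400))  # matches data_clusters=99 after bdev_lvol_grow_lvstore
print(lvol_num_blocks(150))    # matches "num_blocks": 38912 in bdev_get_bdevs
```

The same numbers recur in the lvs_grow_dirty run that follows, since it uses the identical 200M/400M/150M geometry.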
--cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:19.106 03:05:14 -- target/nvmf_lvs_grow.sh@28 -- # lvs=8418fd19-b1ed-41ee-ab2d-3b4d3c12f381 00:17:19.106 03:05:14 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8418fd19-b1ed-41ee-ab2d-3b4d3c12f381 00:17:19.106 03:05:14 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:19.363 03:05:14 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:19.363 03:05:14 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:19.363 03:05:14 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8418fd19-b1ed-41ee-ab2d-3b4d3c12f381 lvol 150 00:17:19.646 03:05:14 -- target/nvmf_lvs_grow.sh@33 -- # lvol=5f7ae2fe-fac6-4d8b-9375-bbd271dbaba6 00:17:19.646 03:05:14 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:19.646 03:05:14 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:19.906 [2024-07-14 03:05:15.074309] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:19.906 [2024-07-14 03:05:15.074401] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:19.906 true 00:17:19.906 03:05:15 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8418fd19-b1ed-41ee-ab2d-3b4d3c12f381 00:17:19.906 03:05:15 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:20.162 03:05:15 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:20.162 03:05:15 -- target/nvmf_lvs_grow.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:20.419 03:05:15 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5f7ae2fe-fac6-4d8b-9375-bbd271dbaba6 00:17:20.677 03:05:15 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:20.936 03:05:16 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:21.196 03:05:16 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1996707 00:17:21.196 03:05:16 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:21.196 03:05:16 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:21.196 03:05:16 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1996707 /var/tmp/bdevperf.sock 00:17:21.196 03:05:16 -- common/autotest_common.sh@819 -- # '[' -z 1996707 ']' 00:17:21.196 03:05:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:21.196 03:05:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:21.196 03:05:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:21.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:21.196 03:05:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:21.196 03:05:16 -- common/autotest_common.sh@10 -- # set +x 00:17:21.196 [2024-07-14 03:05:16.307262] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:21.196 [2024-07-14 03:05:16.307353] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1996707 ] 00:17:21.196 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.196 [2024-07-14 03:05:16.372334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.456 [2024-07-14 03:05:16.460292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.023 03:05:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:22.023 03:05:17 -- common/autotest_common.sh@852 -- # return 0 00:17:22.023 03:05:17 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:22.284 Nvme0n1 00:17:22.542 03:05:17 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:22.542 [ 00:17:22.542 { 00:17:22.542 "name": "Nvme0n1", 00:17:22.542 "aliases": [ 00:17:22.542 "5f7ae2fe-fac6-4d8b-9375-bbd271dbaba6" 00:17:22.542 ], 00:17:22.542 "product_name": "NVMe disk", 00:17:22.542 "block_size": 4096, 00:17:22.542 "num_blocks": 38912, 00:17:22.542 "uuid": "5f7ae2fe-fac6-4d8b-9375-bbd271dbaba6", 00:17:22.542 "assigned_rate_limits": { 00:17:22.542 "rw_ios_per_sec": 0, 00:17:22.542 "rw_mbytes_per_sec": 0, 00:17:22.542 "r_mbytes_per_sec": 0, 00:17:22.542 "w_mbytes_per_sec": 0 00:17:22.542 }, 00:17:22.542 "claimed": false, 00:17:22.542 "zoned": false, 
00:17:22.542 "supported_io_types": { 00:17:22.542 "read": true, 00:17:22.542 "write": true, 00:17:22.542 "unmap": true, 00:17:22.542 "write_zeroes": true, 00:17:22.542 "flush": true, 00:17:22.542 "reset": true, 00:17:22.542 "compare": true, 00:17:22.542 "compare_and_write": true, 00:17:22.542 "abort": true, 00:17:22.542 "nvme_admin": true, 00:17:22.542 "nvme_io": true 00:17:22.542 }, 00:17:22.542 "driver_specific": { 00:17:22.542 "nvme": [ 00:17:22.542 { 00:17:22.542 "trid": { 00:17:22.542 "trtype": "TCP", 00:17:22.542 "adrfam": "IPv4", 00:17:22.542 "traddr": "10.0.0.2", 00:17:22.542 "trsvcid": "4420", 00:17:22.542 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:22.542 }, 00:17:22.542 "ctrlr_data": { 00:17:22.542 "cntlid": 1, 00:17:22.542 "vendor_id": "0x8086", 00:17:22.542 "model_number": "SPDK bdev Controller", 00:17:22.542 "serial_number": "SPDK0", 00:17:22.542 "firmware_revision": "24.01.1", 00:17:22.542 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:22.542 "oacs": { 00:17:22.542 "security": 0, 00:17:22.542 "format": 0, 00:17:22.542 "firmware": 0, 00:17:22.542 "ns_manage": 0 00:17:22.542 }, 00:17:22.542 "multi_ctrlr": true, 00:17:22.542 "ana_reporting": false 00:17:22.542 }, 00:17:22.542 "vs": { 00:17:22.542 "nvme_version": "1.3" 00:17:22.542 }, 00:17:22.542 "ns_data": { 00:17:22.542 "id": 1, 00:17:22.542 "can_share": true 00:17:22.542 } 00:17:22.542 } 00:17:22.542 ], 00:17:22.542 "mp_policy": "active_passive" 00:17:22.542 } 00:17:22.542 } 00:17:22.542 ] 00:17:22.542 03:05:17 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1996851 00:17:22.542 03:05:17 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:22.542 03:05:17 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:22.802 Running I/O for 10 seconds... 
00:17:23.740 Latency(us) 00:17:23.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.740 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.740 Nvme0n1 : 1.00 14536.00 56.78 0.00 0.00 0.00 0.00 0.00 00:17:23.740 =================================================================================================================== 00:17:23.740 Total : 14536.00 56.78 0.00 0.00 0.00 0.00 0.00 00:17:23.740 00:17:24.677 03:05:19 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8418fd19-b1ed-41ee-ab2d-3b4d3c12f381 00:17:24.677 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.677 Nvme0n1 : 2.00 14656.00 57.25 0.00 0.00 0.00 0.00 0.00 00:17:24.677 =================================================================================================================== 00:17:24.677 Total : 14656.00 57.25 0.00 0.00 0.00 0.00 0.00 00:17:24.677 00:17:24.935 true 00:17:24.935 03:05:20 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8418fd19-b1ed-41ee-ab2d-3b4d3c12f381 00:17:24.935 03:05:20 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:25.194 03:05:20 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:25.194 03:05:20 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:25.194 03:05:20 -- target/nvmf_lvs_grow.sh@65 -- # wait 1996851 00:17:25.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.765 Nvme0n1 : 3.00 14786.67 57.76 0.00 0.00 0.00 0.00 0.00 00:17:25.765 =================================================================================================================== 00:17:25.765 Total : 14786.67 57.76 0.00 0.00 0.00 0.00 0.00 00:17:25.765 00:17:26.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.703 
Nvme0n1 : 4.00 14848.00 58.00 0.00 0.00 0.00 0.00 0.00 00:17:26.703 =================================================================================================================== 00:17:26.703 Total : 14848.00 58.00 0.00 0.00 0.00 0.00 0.00 00:17:26.703 00:17:27.641 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.641 Nvme0n1 : 5.00 14899.20 58.20 0.00 0.00 0.00 0.00 0.00 00:17:27.641 =================================================================================================================== 00:17:27.641 Total : 14899.20 58.20 0.00 0.00 0.00 0.00 0.00 00:17:27.641 00:17:29.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.019 Nvme0n1 : 6.00 14945.17 58.38 0.00 0.00 0.00 0.00 0.00 00:17:29.019 =================================================================================================================== 00:17:29.019 Total : 14945.17 58.38 0.00 0.00 0.00 0.00 0.00 00:17:29.019 00:17:29.952 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.952 Nvme0n1 : 7.00 14977.14 58.50 0.00 0.00 0.00 0.00 0.00 00:17:29.952 =================================================================================================================== 00:17:29.952 Total : 14977.14 58.50 0.00 0.00 0.00 0.00 0.00 00:17:29.953 00:17:30.892 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:30.892 Nvme0n1 : 8.00 15011.12 58.64 0.00 0.00 0.00 0.00 0.00 00:17:30.892 =================================================================================================================== 00:17:30.892 Total : 15011.12 58.64 0.00 0.00 0.00 0.00 0.00 00:17:30.892 00:17:31.830 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.830 Nvme0n1 : 9.00 15040.78 58.75 0.00 0.00 0.00 0.00 0.00 00:17:31.830 =================================================================================================================== 
00:17:31.830 Total : 15040.78 58.75 0.00 0.00 0.00 0.00 0.00 00:17:31.830 00:17:32.770 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.770 Nvme0n1 : 10.00 15066.40 58.85 0.00 0.00 0.00 0.00 0.00 00:17:32.770 =================================================================================================================== 00:17:32.770 Total : 15066.40 58.85 0.00 0.00 0.00 0.00 0.00 00:17:32.770 00:17:32.770 00:17:32.770 Latency(us) 00:17:32.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:32.770 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.770 Nvme0n1 : 10.01 15064.87 58.85 0.00 0.00 8490.83 2184.53 13204.29 00:17:32.770 =================================================================================================================== 00:17:32.770 Total : 15064.87 58.85 0.00 0.00 8490.83 2184.53 13204.29 00:17:32.770 0 00:17:32.770 03:05:27 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1996707 00:17:32.770 03:05:27 -- common/autotest_common.sh@926 -- # '[' -z 1996707 ']' 00:17:32.770 03:05:27 -- common/autotest_common.sh@930 -- # kill -0 1996707 00:17:32.770 03:05:27 -- common/autotest_common.sh@931 -- # uname 00:17:32.770 03:05:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:32.770 03:05:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1996707 00:17:32.770 03:05:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:32.770 03:05:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:32.770 03:05:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1996707' 00:17:32.770 killing process with pid 1996707 00:17:32.770 03:05:27 -- common/autotest_common.sh@945 -- # kill 1996707 00:17:32.770 Received shutdown signal, test time was about 10.000000 seconds 00:17:32.770 00:17:32.770 Latency(us) 00:17:32.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:17:32.770 =================================================================================================================== 00:17:32.770 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:32.770 03:05:27 -- common/autotest_common.sh@950 -- # wait 1996707 00:17:33.030 03:05:28 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:33.288 03:05:28 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8418fd19-b1ed-41ee-ab2d-3b4d3c12f381 00:17:33.288 03:05:28 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:33.548 03:05:28 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:33.548 03:05:28 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:17:33.548 03:05:28 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1994011 00:17:33.548 03:05:28 -- target/nvmf_lvs_grow.sh@74 -- # wait 1994011 00:17:33.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1994011 Killed "${NVMF_APP[@]}" "$@" 00:17:33.548 03:05:28 -- target/nvmf_lvs_grow.sh@74 -- # true 00:17:33.548 03:05:28 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:17:33.548 03:05:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:33.548 03:05:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:33.548 03:05:28 -- common/autotest_common.sh@10 -- # set +x 00:17:33.548 03:05:28 -- nvmf/common.sh@469 -- # nvmfpid=1998214 00:17:33.548 03:05:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:33.548 03:05:28 -- nvmf/common.sh@470 -- # waitforlisten 1998214 00:17:33.548 03:05:28 -- common/autotest_common.sh@819 -- # '[' -z 1998214 ']' 00:17:33.548 03:05:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.548 
03:05:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:33.548 03:05:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.548 03:05:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:33.548 03:05:28 -- common/autotest_common.sh@10 -- # set +x 00:17:33.548 [2024-07-14 03:05:28.780097] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:33.548 [2024-07-14 03:05:28.780189] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.807 EAL: No free 2048 kB hugepages reported on node 1 00:17:33.807 [2024-07-14 03:05:28.846897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.807 [2024-07-14 03:05:28.930439] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:33.807 [2024-07-14 03:05:28.930577] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.807 [2024-07-14 03:05:28.930593] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.807 [2024-07-14 03:05:28.930604] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:33.807 [2024-07-14 03:05:28.930629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.778 03:05:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:34.778 03:05:29 -- common/autotest_common.sh@852 -- # return 0 00:17:34.778 03:05:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:34.778 03:05:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:34.778 03:05:29 -- common/autotest_common.sh@10 -- # set +x 00:17:34.778 03:05:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:34.778 03:05:29 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:34.778 [2024-07-14 03:05:29.991267] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:34.778 [2024-07-14 03:05:29.991420] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:34.778 [2024-07-14 03:05:29.991478] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:34.778 03:05:30 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:17:34.778 03:05:30 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 5f7ae2fe-fac6-4d8b-9375-bbd271dbaba6 00:17:34.778 03:05:30 -- common/autotest_common.sh@887 -- # local bdev_name=5f7ae2fe-fac6-4d8b-9375-bbd271dbaba6 00:17:34.778 03:05:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:34.778 03:05:30 -- common/autotest_common.sh@889 -- # local i 00:17:34.778 03:05:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:34.778 03:05:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:34.778 03:05:30 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:35.040 03:05:30 -- common/autotest_common.sh@894 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5f7ae2fe-fac6-4d8b-9375-bbd271dbaba6 -t 2000 00:17:35.297 [ 00:17:35.298 { 00:17:35.298 "name": "5f7ae2fe-fac6-4d8b-9375-bbd271dbaba6", 00:17:35.298 "aliases": [ 00:17:35.298 "lvs/lvol" 00:17:35.298 ], 00:17:35.298 "product_name": "Logical Volume", 00:17:35.298 "block_size": 4096, 00:17:35.298 "num_blocks": 38912, 00:17:35.298 "uuid": "5f7ae2fe-fac6-4d8b-9375-bbd271dbaba6", 00:17:35.298 "assigned_rate_limits": { 00:17:35.298 "rw_ios_per_sec": 0, 00:17:35.298 "rw_mbytes_per_sec": 0, 00:17:35.298 "r_mbytes_per_sec": 0, 00:17:35.298 "w_mbytes_per_sec": 0 00:17:35.298 }, 00:17:35.298 "claimed": false, 00:17:35.298 "zoned": false, 00:17:35.298 "supported_io_types": { 00:17:35.298 "read": true, 00:17:35.298 "write": true, 00:17:35.298 "unmap": true, 00:17:35.298 "write_zeroes": true, 00:17:35.298 "flush": false, 00:17:35.298 "reset": true, 00:17:35.298 "compare": false, 00:17:35.298 "compare_and_write": false, 00:17:35.298 "abort": false, 00:17:35.298 "nvme_admin": false, 00:17:35.298 "nvme_io": false 00:17:35.298 }, 00:17:35.298 "driver_specific": { 00:17:35.298 "lvol": { 00:17:35.298 "lvol_store_uuid": "8418fd19-b1ed-41ee-ab2d-3b4d3c12f381", 00:17:35.298 "base_bdev": "aio_bdev", 00:17:35.298 "thin_provision": false, 00:17:35.298 "snapshot": false, 00:17:35.298 "clone": false, 00:17:35.298 "esnap_clone": false 00:17:35.298 } 00:17:35.298 } 00:17:35.298 } 00:17:35.298 ] 00:17:35.298 03:05:30 -- common/autotest_common.sh@895 -- # return 0 00:17:35.298 03:05:30 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8418fd19-b1ed-41ee-ab2d-3b4d3c12f381 00:17:35.298 03:05:30 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:17:35.555 03:05:30 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:17:35.555 03:05:30 -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8418fd19-b1ed-41ee-ab2d-3b4d3c12f381 00:17:35.555 03:05:30 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:17:35.815 03:05:31 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:17:35.815 03:05:31 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:36.075 [2024-07-14 03:05:31.248211] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:36.075 03:05:31 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8418fd19-b1ed-41ee-ab2d-3b4d3c12f381 00:17:36.075 03:05:31 -- common/autotest_common.sh@640 -- # local es=0 00:17:36.075 03:05:31 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8418fd19-b1ed-41ee-ab2d-3b4d3c12f381 00:17:36.075 03:05:31 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.075 03:05:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:36.075 03:05:31 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.075 03:05:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:36.075 03:05:31 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.075 03:05:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:36.075 03:05:31 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:36.075 03:05:31 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:36.075 
03:05:31 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8418fd19-b1ed-41ee-ab2d-3b4d3c12f381 00:17:36.334 request: 00:17:36.334 { 00:17:36.334 "uuid": "8418fd19-b1ed-41ee-ab2d-3b4d3c12f381", 00:17:36.334 "method": "bdev_lvol_get_lvstores", 00:17:36.334 "req_id": 1 00:17:36.334 } 00:17:36.334 Got JSON-RPC error response 00:17:36.334 response: 00:17:36.334 { 00:17:36.334 "code": -19, 00:17:36.334 "message": "No such device" 00:17:36.334 } 00:17:36.334 03:05:31 -- common/autotest_common.sh@643 -- # es=1 00:17:36.334 03:05:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:36.334 03:05:31 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:36.334 03:05:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:36.334 03:05:31 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:36.593 aio_bdev 00:17:36.593 03:05:31 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 5f7ae2fe-fac6-4d8b-9375-bbd271dbaba6 00:17:36.593 03:05:31 -- common/autotest_common.sh@887 -- # local bdev_name=5f7ae2fe-fac6-4d8b-9375-bbd271dbaba6 00:17:36.593 03:05:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:36.593 03:05:31 -- common/autotest_common.sh@889 -- # local i 00:17:36.593 03:05:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:36.593 03:05:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:36.593 03:05:31 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:36.852 03:05:31 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5f7ae2fe-fac6-4d8b-9375-bbd271dbaba6 -t 2000 00:17:37.110 [ 00:17:37.110 { 00:17:37.110 "name": 
"5f7ae2fe-fac6-4d8b-9375-bbd271dbaba6", 00:17:37.110 "aliases": [ 00:17:37.110 "lvs/lvol" 00:17:37.110 ], 00:17:37.110 "product_name": "Logical Volume", 00:17:37.110 "block_size": 4096, 00:17:37.110 "num_blocks": 38912, 00:17:37.110 "uuid": "5f7ae2fe-fac6-4d8b-9375-bbd271dbaba6", 00:17:37.110 "assigned_rate_limits": { 00:17:37.110 "rw_ios_per_sec": 0, 00:17:37.110 "rw_mbytes_per_sec": 0, 00:17:37.110 "r_mbytes_per_sec": 0, 00:17:37.110 "w_mbytes_per_sec": 0 00:17:37.110 }, 00:17:37.110 "claimed": false, 00:17:37.110 "zoned": false, 00:17:37.110 "supported_io_types": { 00:17:37.110 "read": true, 00:17:37.110 "write": true, 00:17:37.110 "unmap": true, 00:17:37.110 "write_zeroes": true, 00:17:37.110 "flush": false, 00:17:37.110 "reset": true, 00:17:37.110 "compare": false, 00:17:37.110 "compare_and_write": false, 00:17:37.110 "abort": false, 00:17:37.110 "nvme_admin": false, 00:17:37.110 "nvme_io": false 00:17:37.110 }, 00:17:37.110 "driver_specific": { 00:17:37.110 "lvol": { 00:17:37.110 "lvol_store_uuid": "8418fd19-b1ed-41ee-ab2d-3b4d3c12f381", 00:17:37.110 "base_bdev": "aio_bdev", 00:17:37.110 "thin_provision": false, 00:17:37.110 "snapshot": false, 00:17:37.110 "clone": false, 00:17:37.110 "esnap_clone": false 00:17:37.110 } 00:17:37.110 } 00:17:37.110 } 00:17:37.110 ] 00:17:37.110 03:05:32 -- common/autotest_common.sh@895 -- # return 0 00:17:37.110 03:05:32 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8418fd19-b1ed-41ee-ab2d-3b4d3c12f381 00:17:37.110 03:05:32 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:37.370 03:05:32 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:37.370 03:05:32 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8418fd19-b1ed-41ee-ab2d-3b4d3c12f381 00:17:37.370 03:05:32 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:37.630 
03:05:32 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:37.630 03:05:32 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5f7ae2fe-fac6-4d8b-9375-bbd271dbaba6 00:17:37.890 03:05:32 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8418fd19-b1ed-41ee-ab2d-3b4d3c12f381 00:17:38.153 03:05:33 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:38.153 03:05:33 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:38.412 00:17:38.412 real 0m19.651s 00:17:38.412 user 0m49.135s 00:17:38.412 sys 0m4.912s 00:17:38.413 03:05:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:38.413 03:05:33 -- common/autotest_common.sh@10 -- # set +x 00:17:38.413 ************************************ 00:17:38.413 END TEST lvs_grow_dirty 00:17:38.413 ************************************ 00:17:38.413 03:05:33 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:38.413 03:05:33 -- common/autotest_common.sh@796 -- # type=--id 00:17:38.413 03:05:33 -- common/autotest_common.sh@797 -- # id=0 00:17:38.413 03:05:33 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:38.413 03:05:33 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:38.413 03:05:33 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:38.413 03:05:33 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:38.413 03:05:33 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:38.413 03:05:33 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:38.413 nvmf_trace.0 00:17:38.413 03:05:33 -- common/autotest_common.sh@811 -- # 
return 0 00:17:38.413 03:05:33 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:38.413 03:05:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:38.413 03:05:33 -- nvmf/common.sh@116 -- # sync 00:17:38.413 03:05:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:38.413 03:05:33 -- nvmf/common.sh@119 -- # set +e 00:17:38.413 03:05:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:38.413 03:05:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:38.413 rmmod nvme_tcp 00:17:38.413 rmmod nvme_fabrics 00:17:38.413 rmmod nvme_keyring 00:17:38.413 03:05:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:38.413 03:05:33 -- nvmf/common.sh@123 -- # set -e 00:17:38.413 03:05:33 -- nvmf/common.sh@124 -- # return 0 00:17:38.413 03:05:33 -- nvmf/common.sh@477 -- # '[' -n 1998214 ']' 00:17:38.413 03:05:33 -- nvmf/common.sh@478 -- # killprocess 1998214 00:17:38.413 03:05:33 -- common/autotest_common.sh@926 -- # '[' -z 1998214 ']' 00:17:38.413 03:05:33 -- common/autotest_common.sh@930 -- # kill -0 1998214 00:17:38.413 03:05:33 -- common/autotest_common.sh@931 -- # uname 00:17:38.413 03:05:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:38.413 03:05:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1998214 00:17:38.413 03:05:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:38.413 03:05:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:38.413 03:05:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1998214' 00:17:38.413 killing process with pid 1998214 00:17:38.413 03:05:33 -- common/autotest_common.sh@945 -- # kill 1998214 00:17:38.413 03:05:33 -- common/autotest_common.sh@950 -- # wait 1998214 00:17:38.671 03:05:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:38.671 03:05:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:38.671 03:05:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:38.671 03:05:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:38.671 03:05:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:38.671 03:05:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.671 03:05:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.671 03:05:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.575 03:05:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:40.575 00:17:40.575 real 0m42.959s 00:17:40.575 user 1m12.452s 00:17:40.575 sys 0m8.640s 00:17:40.575 03:05:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.575 03:05:35 -- common/autotest_common.sh@10 -- # set +x 00:17:40.575 ************************************ 00:17:40.575 END TEST nvmf_lvs_grow 00:17:40.575 ************************************ 00:17:40.575 03:05:35 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:40.575 03:05:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:40.575 03:05:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:40.575 03:05:35 -- common/autotest_common.sh@10 -- # set +x 00:17:40.575 ************************************ 00:17:40.575 START TEST nvmf_bdev_io_wait 00:17:40.575 ************************************ 00:17:40.575 03:05:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:40.834 * Looking for test storage... 
00:17:40.834 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:40.834 03:05:35 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.834 03:05:35 -- nvmf/common.sh@7 -- # uname -s 00:17:40.834 03:05:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.834 03:05:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.834 03:05:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.834 03:05:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.834 03:05:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.834 03:05:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.834 03:05:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.834 03:05:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.834 03:05:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.834 03:05:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.834 03:05:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:40.834 03:05:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:40.834 03:05:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.834 03:05:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.834 03:05:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.834 03:05:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:40.834 03:05:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.834 03:05:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.834 03:05:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.834 03:05:35 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.834 03:05:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.834 03:05:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.834 03:05:35 -- paths/export.sh@5 -- # export PATH 00:17:40.834 03:05:35 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.834 03:05:35 -- nvmf/common.sh@46 -- # : 0 00:17:40.834 03:05:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:40.834 03:05:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:40.834 03:05:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:40.834 03:05:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.834 03:05:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.834 03:05:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:40.834 03:05:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:40.834 03:05:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:40.834 03:05:35 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:40.834 03:05:35 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:40.834 03:05:35 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:40.834 03:05:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:40.834 03:05:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.834 03:05:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:40.834 03:05:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:40.834 03:05:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:40.834 03:05:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.834 03:05:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.834 03:05:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.834 
03:05:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:40.834 03:05:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:40.834 03:05:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:40.834 03:05:35 -- common/autotest_common.sh@10 -- # set +x 00:17:42.739 03:05:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:42.739 03:05:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:42.739 03:05:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:42.739 03:05:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:42.739 03:05:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:42.739 03:05:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:42.739 03:05:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:42.739 03:05:37 -- nvmf/common.sh@294 -- # net_devs=() 00:17:42.739 03:05:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:42.739 03:05:37 -- nvmf/common.sh@295 -- # e810=() 00:17:42.739 03:05:37 -- nvmf/common.sh@295 -- # local -ga e810 00:17:42.739 03:05:37 -- nvmf/common.sh@296 -- # x722=() 00:17:42.739 03:05:37 -- nvmf/common.sh@296 -- # local -ga x722 00:17:42.739 03:05:37 -- nvmf/common.sh@297 -- # mlx=() 00:17:42.739 03:05:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:42.739 03:05:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:42.739 03:05:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:42.739 03:05:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:42.739 03:05:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:42.739 03:05:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:42.739 03:05:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:42.739 03:05:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:42.740 03:05:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:42.740 03:05:37 
-- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:42.740 03:05:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:42.740 03:05:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:42.740 03:05:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:42.740 03:05:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:42.740 03:05:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:42.740 03:05:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:42.740 03:05:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:42.740 03:05:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:42.740 03:05:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:42.740 03:05:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:42.740 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:42.740 03:05:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:42.740 03:05:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:42.740 03:05:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.740 03:05:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.740 03:05:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:42.740 03:05:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:42.740 03:05:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:42.740 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:42.740 03:05:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:42.740 03:05:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:42.740 03:05:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:42.740 03:05:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:42.740 03:05:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:42.740 03:05:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:42.740 03:05:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:42.740 03:05:37 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:42.740 03:05:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:42.740 03:05:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.740 03:05:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:42.740 03:05:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.740 03:05:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:42.740 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:42.740 03:05:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.740 03:05:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:42.740 03:05:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:42.740 03:05:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:42.740 03:05:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:42.740 03:05:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:42.740 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:42.740 03:05:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:42.740 03:05:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:42.740 03:05:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:42.740 03:05:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:42.740 03:05:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:42.740 03:05:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:42.740 03:05:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.740 03:05:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.740 03:05:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:42.740 03:05:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:42.740 03:05:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:42.740 03:05:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:42.740 03:05:37 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:42.740 03:05:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:42.740 03:05:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.740 03:05:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:42.740 03:05:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:42.740 03:05:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:42.740 03:05:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:42.740 03:05:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:42.740 03:05:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:42.740 03:05:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:42.740 03:05:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:42.740 03:05:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:42.740 03:05:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:42.740 03:05:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:42.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:42.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:17:42.740 00:17:42.740 --- 10.0.0.2 ping statistics --- 00:17:42.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.740 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:17:42.740 03:05:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:42.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:42.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:17:42.740 00:17:42.740 --- 10.0.0.1 ping statistics --- 00:17:42.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:42.740 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:17:42.740 03:05:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:42.740 03:05:37 -- nvmf/common.sh@410 -- # return 0 00:17:42.740 03:05:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:42.740 03:05:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:42.740 03:05:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:42.740 03:05:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:42.740 03:05:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:42.740 03:05:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:42.740 03:05:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:42.740 03:05:37 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:42.740 03:05:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:42.740 03:05:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:42.740 03:05:37 -- common/autotest_common.sh@10 -- # set +x 00:17:42.740 03:05:37 -- nvmf/common.sh@469 -- # nvmfpid=2000763 00:17:42.740 03:05:37 -- nvmf/common.sh@470 -- # waitforlisten 2000763 00:17:42.740 03:05:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:42.740 03:05:37 -- common/autotest_common.sh@819 -- # '[' -z 2000763 ']' 00:17:42.740 03:05:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:42.740 03:05:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:42.740 03:05:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:42.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:42.740 03:05:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:42.740 03:05:37 -- common/autotest_common.sh@10 -- # set +x 00:17:42.740 [2024-07-14 03:05:37.961399] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:42.740 [2024-07-14 03:05:37.961469] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.000 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.000 [2024-07-14 03:05:38.029806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:43.000 [2024-07-14 03:05:38.114384] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:43.000 [2024-07-14 03:05:38.114535] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.000 [2024-07-14 03:05:38.114552] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.000 [2024-07-14 03:05:38.114563] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
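[editor's note] The repeated "EAL: No free 2048 kB hugepages reported on node 1" notices above come from DPDK probing each NUMA node's hugepage pool at startup. A minimal sketch of inspecting the pool counters a host exposes; the helper name `free_hugepages` and the sample values are ours, not taken from this CI host (on a live box you would feed it `/proc/meminfo`, or check the per-node pools under `/sys/devices/system/node/node*/hugepages/`):

```shell
# Print the number of free hugepages from meminfo-style input on stdin.
# DPDK's EAL emits the notice seen in the log when a node's pool is empty.
free_hugepages() {
  awk '/^HugePages_Free:/ { print $2 }'
}

# Sample input (hypothetical values); on a real host: free_hugepages < /proc/meminfo
printf 'HugePages_Total:    1024\nHugePages_Free:      512\nHugepagesize:       2048 kB\n' \
  | free_hugepages   # -> 512
```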
00:17:43.000 [2024-07-14 03:05:38.114617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.000 [2024-07-14 03:05:38.114673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.000 [2024-07-14 03:05:38.114738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:43.000 [2024-07-14 03:05:38.114741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:43.000 03:05:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:43.000 03:05:38 -- common/autotest_common.sh@852 -- # return 0 00:17:43.000 03:05:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:43.000 03:05:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:43.000 03:05:38 -- common/autotest_common.sh@10 -- # set +x 00:17:43.000 03:05:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.000 03:05:38 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:43.000 03:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:43.000 03:05:38 -- common/autotest_common.sh@10 -- # set +x 00:17:43.000 03:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:43.000 03:05:38 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:43.000 03:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:43.000 03:05:38 -- common/autotest_common.sh@10 -- # set +x 00:17:43.260 03:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:43.260 03:05:38 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:43.260 03:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:43.260 03:05:38 -- common/autotest_common.sh@10 -- # set +x 00:17:43.260 [2024-07-14 03:05:38.267591] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:43.260 03:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:43.260 03:05:38 -- target/bdev_io_wait.sh@22 -- # 
rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:43.260 03:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:43.260 03:05:38 -- common/autotest_common.sh@10 -- # set +x 00:17:43.260 Malloc0 00:17:43.260 03:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:43.260 03:05:38 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:43.260 03:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:43.260 03:05:38 -- common/autotest_common.sh@10 -- # set +x 00:17:43.260 03:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:43.260 03:05:38 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:43.260 03:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:43.260 03:05:38 -- common/autotest_common.sh@10 -- # set +x 00:17:43.260 03:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:43.260 03:05:38 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.260 03:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:43.260 03:05:38 -- common/autotest_common.sh@10 -- # set +x 00:17:43.260 [2024-07-14 03:05:38.335514] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.260 03:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:43.260 03:05:38 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2000793 00:17:43.260 03:05:38 -- target/bdev_io_wait.sh@30 -- # READ_PID=2000794 00:17:43.260 03:05:38 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:43.260 03:05:38 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:43.260 03:05:38 -- nvmf/common.sh@520 -- # config=() 00:17:43.260 03:05:38 -- nvmf/common.sh@520 -- # local 
subsystem config 00:17:43.260 03:05:38 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2000797 00:17:43.260 03:05:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:43.260 03:05:38 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:43.260 03:05:38 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:43.260 03:05:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:43.260 { 00:17:43.260 "params": { 00:17:43.260 "name": "Nvme$subsystem", 00:17:43.260 "trtype": "$TEST_TRANSPORT", 00:17:43.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:43.260 "adrfam": "ipv4", 00:17:43.260 "trsvcid": "$NVMF_PORT", 00:17:43.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:43.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:43.260 "hdgst": ${hdgst:-false}, 00:17:43.260 "ddgst": ${ddgst:-false} 00:17:43.260 }, 00:17:43.260 "method": "bdev_nvme_attach_controller" 00:17:43.260 } 00:17:43.260 EOF 00:17:43.260 )") 00:17:43.260 03:05:38 -- nvmf/common.sh@520 -- # config=() 00:17:43.260 03:05:38 -- nvmf/common.sh@520 -- # local subsystem config 00:17:43.260 03:05:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:43.260 03:05:38 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2000799 00:17:43.260 03:05:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:43.260 { 00:17:43.260 "params": { 00:17:43.260 "name": "Nvme$subsystem", 00:17:43.260 "trtype": "$TEST_TRANSPORT", 00:17:43.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:43.260 "adrfam": "ipv4", 00:17:43.260 "trsvcid": "$NVMF_PORT", 00:17:43.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:43.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:43.260 "hdgst": ${hdgst:-false}, 00:17:43.260 "ddgst": ${ddgst:-false} 00:17:43.260 }, 00:17:43.260 "method": "bdev_nvme_attach_controller" 00:17:43.260 } 00:17:43.260 EOF 00:17:43.260 )") 00:17:43.260 
03:05:38 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:43.260 03:05:38 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:43.260 03:05:38 -- target/bdev_io_wait.sh@35 -- # sync 00:17:43.260 03:05:38 -- nvmf/common.sh@520 -- # config=() 00:17:43.260 03:05:38 -- nvmf/common.sh@520 -- # local subsystem config 00:17:43.260 03:05:38 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:43.260 03:05:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:43.260 03:05:38 -- nvmf/common.sh@542 -- # cat 00:17:43.260 03:05:38 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:43.260 03:05:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:43.260 { 00:17:43.260 "params": { 00:17:43.260 "name": "Nvme$subsystem", 00:17:43.260 "trtype": "$TEST_TRANSPORT", 00:17:43.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:43.260 "adrfam": "ipv4", 00:17:43.260 "trsvcid": "$NVMF_PORT", 00:17:43.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:43.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:43.260 "hdgst": ${hdgst:-false}, 00:17:43.260 "ddgst": ${ddgst:-false} 00:17:43.260 }, 00:17:43.260 "method": "bdev_nvme_attach_controller" 00:17:43.260 } 00:17:43.260 EOF 00:17:43.260 )") 00:17:43.260 03:05:38 -- nvmf/common.sh@520 -- # config=() 00:17:43.260 03:05:38 -- nvmf/common.sh@520 -- # local subsystem config 00:17:43.260 03:05:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:43.260 03:05:38 -- nvmf/common.sh@542 -- # cat 00:17:43.260 03:05:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:43.260 { 00:17:43.260 "params": { 00:17:43.260 "name": "Nvme$subsystem", 00:17:43.260 "trtype": "$TEST_TRANSPORT", 00:17:43.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:43.260 
"adrfam": "ipv4", 00:17:43.260 "trsvcid": "$NVMF_PORT", 00:17:43.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:43.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:43.260 "hdgst": ${hdgst:-false}, 00:17:43.260 "ddgst": ${ddgst:-false} 00:17:43.260 }, 00:17:43.260 "method": "bdev_nvme_attach_controller" 00:17:43.260 } 00:17:43.260 EOF 00:17:43.260 )") 00:17:43.260 03:05:38 -- nvmf/common.sh@542 -- # cat 00:17:43.260 03:05:38 -- target/bdev_io_wait.sh@37 -- # wait 2000793 00:17:43.260 03:05:38 -- nvmf/common.sh@542 -- # cat 00:17:43.260 03:05:38 -- nvmf/common.sh@544 -- # jq . 00:17:43.260 03:05:38 -- nvmf/common.sh@544 -- # jq . 00:17:43.260 03:05:38 -- nvmf/common.sh@544 -- # jq . 00:17:43.260 03:05:38 -- nvmf/common.sh@545 -- # IFS=, 00:17:43.260 03:05:38 -- nvmf/common.sh@544 -- # jq . 00:17:43.260 03:05:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:43.260 "params": { 00:17:43.260 "name": "Nvme1", 00:17:43.260 "trtype": "tcp", 00:17:43.260 "traddr": "10.0.0.2", 00:17:43.260 "adrfam": "ipv4", 00:17:43.260 "trsvcid": "4420", 00:17:43.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.260 "hdgst": false, 00:17:43.260 "ddgst": false 00:17:43.260 }, 00:17:43.260 "method": "bdev_nvme_attach_controller" 00:17:43.260 }' 00:17:43.260 03:05:38 -- nvmf/common.sh@545 -- # IFS=, 00:17:43.260 03:05:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:43.260 "params": { 00:17:43.260 "name": "Nvme1", 00:17:43.260 "trtype": "tcp", 00:17:43.260 "traddr": "10.0.0.2", 00:17:43.260 "adrfam": "ipv4", 00:17:43.261 "trsvcid": "4420", 00:17:43.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.261 "hdgst": false, 00:17:43.261 "ddgst": false 00:17:43.261 }, 00:17:43.261 "method": "bdev_nvme_attach_controller" 00:17:43.261 }' 00:17:43.261 03:05:38 -- nvmf/common.sh@545 -- # IFS=, 00:17:43.261 03:05:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 
00:17:43.261 "params": { 00:17:43.261 "name": "Nvme1", 00:17:43.261 "trtype": "tcp", 00:17:43.261 "traddr": "10.0.0.2", 00:17:43.261 "adrfam": "ipv4", 00:17:43.261 "trsvcid": "4420", 00:17:43.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.261 "hdgst": false, 00:17:43.261 "ddgst": false 00:17:43.261 }, 00:17:43.261 "method": "bdev_nvme_attach_controller" 00:17:43.261 }' 00:17:43.261 03:05:38 -- nvmf/common.sh@545 -- # IFS=, 00:17:43.261 03:05:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:43.261 "params": { 00:17:43.261 "name": "Nvme1", 00:17:43.261 "trtype": "tcp", 00:17:43.261 "traddr": "10.0.0.2", 00:17:43.261 "adrfam": "ipv4", 00:17:43.261 "trsvcid": "4420", 00:17:43.261 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.261 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.261 "hdgst": false, 00:17:43.261 "ddgst": false 00:17:43.261 }, 00:17:43.261 "method": "bdev_nvme_attach_controller" 00:17:43.261 }' 00:17:43.261 [2024-07-14 03:05:38.381418] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:43.261 [2024-07-14 03:05:38.381497] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:43.261 [2024-07-14 03:05:38.381598] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:43.261 [2024-07-14 03:05:38.381597] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:43.261 [2024-07-14 03:05:38.381597] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:17:43.261 [2024-07-14 03:05:38.381679] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:43.261 [2024-07-14 03:05:38.381680] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:43.261 [2024-07-14 03:05:38.381682] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:43.261 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.521 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.521 [2024-07-14 03:05:38.565027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.521 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.521 [2024-07-14 03:05:38.637818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:43.521 [2024-07-14 03:05:38.664538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.521 EAL: No free 2048 kB hugepages reported on node 1 00:17:43.521 [2024-07-14 03:05:38.737831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:43.780 [2024-07-14 03:05:38.789086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.780 [2024-07-14 03:05:38.847494] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.780 [2024-07-14 03:05:38.871568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:43.780 [2024-07-14 03:05:38.918222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:43.780 Running I/O for 1 seconds... 00:17:44.039 Running I/O for 1 seconds... 
00:17:44.039 Running I/O for 1 seconds... 00:17:44.039 Running I/O for 1 seconds... 00:17:44.975 00:17:44.975 Latency(us) 00:17:44.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.975 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:44.975 Nvme1n1 : 1.02 5721.23 22.35 0.00 0.00 22118.93 8641.04 34564.17 00:17:44.975 =================================================================================================================== 00:17:44.975 Total : 5721.23 22.35 0.00 0.00 22118.93 8641.04 34564.17 00:17:44.975 00:17:44.975 Latency(us) 00:17:44.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.975 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:44.975 Nvme1n1 : 1.00 192641.97 752.51 0.00 0.00 661.84 251.83 1104.40 00:17:44.975 =================================================================================================================== 00:17:44.975 Total : 192641.97 752.51 0.00 0.00 661.84 251.83 1104.40 00:17:44.975 00:17:44.975 Latency(us) 00:17:44.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.975 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:44.975 Nvme1n1 : 1.01 5574.60 21.78 0.00 0.00 22885.47 6165.24 37476.88 00:17:44.975 =================================================================================================================== 00:17:44.975 Total : 5574.60 21.78 0.00 0.00 22885.47 6165.24 37476.88 00:17:44.975 00:17:44.975 Latency(us) 00:17:44.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.975 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:44.975 Nvme1n1 : 1.01 9327.91 36.44 0.00 0.00 13678.71 4951.61 19612.25 00:17:44.975 =================================================================================================================== 00:17:44.975 Total : 9327.91 36.44 0.00 
0.00 13678.71 4951.61 19612.25 00:17:45.235 03:05:40 -- target/bdev_io_wait.sh@38 -- # wait 2000794 00:17:45.235 03:05:40 -- target/bdev_io_wait.sh@39 -- # wait 2000797 00:17:45.235 03:05:40 -- target/bdev_io_wait.sh@40 -- # wait 2000799 00:17:45.235 03:05:40 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:45.235 03:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:45.235 03:05:40 -- common/autotest_common.sh@10 -- # set +x 00:17:45.495 03:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:45.495 03:05:40 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:45.495 03:05:40 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:45.495 03:05:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:45.495 03:05:40 -- nvmf/common.sh@116 -- # sync 00:17:45.495 03:05:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:45.495 03:05:40 -- nvmf/common.sh@119 -- # set +e 00:17:45.495 03:05:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:45.495 03:05:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:45.495 rmmod nvme_tcp 00:17:45.495 rmmod nvme_fabrics 00:17:45.495 rmmod nvme_keyring 00:17:45.495 03:05:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:45.495 03:05:40 -- nvmf/common.sh@123 -- # set -e 00:17:45.495 03:05:40 -- nvmf/common.sh@124 -- # return 0 00:17:45.495 03:05:40 -- nvmf/common.sh@477 -- # '[' -n 2000763 ']' 00:17:45.495 03:05:40 -- nvmf/common.sh@478 -- # killprocess 2000763 00:17:45.495 03:05:40 -- common/autotest_common.sh@926 -- # '[' -z 2000763 ']' 00:17:45.495 03:05:40 -- common/autotest_common.sh@930 -- # kill -0 2000763 00:17:45.495 03:05:40 -- common/autotest_common.sh@931 -- # uname 00:17:45.495 03:05:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:45.495 03:05:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2000763 00:17:45.495 03:05:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
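[editor's note] The MiB/s column in the Device Information tables above is just IOPS scaled by the fixed 4096-byte IO size (`-o 4096`). A quick arithmetic sanity check, with the figures copied from the flush and write rows; the helper name `iops_to_mibs` is ours:

```shell
# Convert a bdevperf IOPS figure at a fixed IO size into MiB/s,
# matching the MiB/s column reported in the tables above.
iops_to_mibs() {
  # $1 = IOPS, $2 = IO size in bytes
  awk -v iops="$1" -v sz="$2" 'BEGIN { printf "%.2f\n", iops * sz / 1048576 }'
}

iops_to_mibs 192641.97 4096   # flush row -> 752.51
iops_to_mibs 9327.91   4096   # write row -> 36.44
```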
00:17:45.495 03:05:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:45.495 03:05:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2000763' 00:17:45.495 killing process with pid 2000763 00:17:45.495 03:05:40 -- common/autotest_common.sh@945 -- # kill 2000763 00:17:45.495 03:05:40 -- common/autotest_common.sh@950 -- # wait 2000763 00:17:45.754 03:05:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:45.754 03:05:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:45.754 03:05:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:45.754 03:05:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:45.754 03:05:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:45.754 03:05:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.754 03:05:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.754 03:05:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.660 03:05:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:47.660 00:17:47.660 real 0m7.013s 00:17:47.660 user 0m15.533s 00:17:47.660 sys 0m3.587s 00:17:47.660 03:05:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:47.660 03:05:42 -- common/autotest_common.sh@10 -- # set +x 00:17:47.660 ************************************ 00:17:47.660 END TEST nvmf_bdev_io_wait 00:17:47.660 ************************************ 00:17:47.660 03:05:42 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:47.660 03:05:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:47.660 03:05:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:47.660 03:05:42 -- common/autotest_common.sh@10 -- # set +x 00:17:47.660 ************************************ 00:17:47.660 START TEST nvmf_queue_depth 00:17:47.660 ************************************ 00:17:47.660 03:05:42 
-- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:47.660 * Looking for test storage... 00:17:47.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:47.660 03:05:42 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:47.660 03:05:42 -- nvmf/common.sh@7 -- # uname -s 00:17:47.660 03:05:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:47.660 03:05:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:47.660 03:05:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:47.660 03:05:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:47.660 03:05:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:47.660 03:05:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:47.660 03:05:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:47.660 03:05:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:47.660 03:05:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:47.660 03:05:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:47.918 03:05:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:47.918 03:05:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:47.918 03:05:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:47.918 03:05:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:47.918 03:05:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:47.918 03:05:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:47.918 03:05:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:47.918 03:05:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:47.918 03:05:42 -- 
scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:47.918 03:05:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.918 03:05:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.918 03:05:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.918 03:05:42 -- paths/export.sh@5 -- # export PATH 00:17:47.918 03:05:42 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:47.918 03:05:42 -- nvmf/common.sh@46 -- # : 0 00:17:47.918 03:05:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:47.918 03:05:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:47.918 03:05:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:47.918 03:05:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:47.918 03:05:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:47.918 03:05:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:47.918 03:05:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:47.918 03:05:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:47.918 03:05:42 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:47.918 03:05:42 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:47.918 03:05:42 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:47.918 03:05:42 -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:47.918 03:05:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:47.918 03:05:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:47.918 03:05:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:47.918 03:05:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:47.918 03:05:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:47.918 03:05:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.918 03:05:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:17:47.918 03:05:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:47.918 03:05:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:47.918 03:05:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:47.918 03:05:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:47.918 03:05:42 -- common/autotest_common.sh@10 -- # set +x 00:17:49.850 03:05:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:49.850 03:05:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:49.850 03:05:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:49.850 03:05:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:49.850 03:05:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:49.850 03:05:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:49.850 03:05:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:49.850 03:05:44 -- nvmf/common.sh@294 -- # net_devs=() 00:17:49.850 03:05:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:49.850 03:05:44 -- nvmf/common.sh@295 -- # e810=() 00:17:49.850 03:05:44 -- nvmf/common.sh@295 -- # local -ga e810 00:17:49.850 03:05:44 -- nvmf/common.sh@296 -- # x722=() 00:17:49.850 03:05:44 -- nvmf/common.sh@296 -- # local -ga x722 00:17:49.850 03:05:44 -- nvmf/common.sh@297 -- # mlx=() 00:17:49.850 03:05:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:49.850 03:05:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:49.850 03:05:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:49.850 03:05:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:49.850 03:05:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:49.850 03:05:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:49.850 03:05:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:49.850 03:05:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:49.850 03:05:44 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:49.850 03:05:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:49.850 03:05:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:49.850 03:05:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:49.850 03:05:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:49.850 03:05:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:49.850 03:05:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:49.850 03:05:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:49.850 03:05:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:49.850 03:05:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:49.850 03:05:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:49.850 03:05:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:49.850 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:49.850 03:05:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:49.850 03:05:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:49.850 03:05:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.850 03:05:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.850 03:05:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:49.850 03:05:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:49.850 03:05:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:49.850 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:49.850 03:05:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:49.850 03:05:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:49.850 03:05:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:49.850 03:05:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:49.850 03:05:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:49.850 03:05:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:49.850 
03:05:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:49.850 03:05:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:49.850 03:05:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:49.850 03:05:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.850 03:05:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:49.850 03:05:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.850 03:05:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:49.850 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:49.850 03:05:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.850 03:05:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:49.850 03:05:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:49.850 03:05:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:49.850 03:05:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:49.850 03:05:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:49.850 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:49.850 03:05:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:49.850 03:05:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:49.850 03:05:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:49.850 03:05:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:49.850 03:05:44 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:49.850 03:05:44 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:49.850 03:05:44 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:49.850 03:05:44 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:49.850 03:05:44 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:49.850 03:05:44 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:49.850 03:05:44 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:49.850 03:05:44 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:49.850 03:05:44 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:49.850 03:05:44 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:49.850 03:05:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:49.850 03:05:44 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:49.850 03:05:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:49.850 03:05:44 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:49.850 03:05:44 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:49.850 03:05:44 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:49.850 03:05:44 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:49.850 03:05:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:49.850 03:05:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:49.850 03:05:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:49.850 03:05:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:49.850 03:05:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:49.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:49.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:17:49.850 00:17:49.850 --- 10.0.0.2 ping statistics --- 00:17:49.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.850 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:17:49.850 03:05:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:49.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:49.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:17:49.850 00:17:49.850 --- 10.0.0.1 ping statistics --- 00:17:49.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:49.850 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:17:49.850 03:05:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:49.850 03:05:45 -- nvmf/common.sh@410 -- # return 0 00:17:49.850 03:05:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:49.850 03:05:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:49.850 03:05:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:49.850 03:05:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:49.850 03:05:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:49.850 03:05:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:49.850 03:05:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:50.110 03:05:45 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:50.110 03:05:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:50.110 03:05:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:50.110 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:17:50.110 03:05:45 -- nvmf/common.sh@469 -- # nvmfpid=2003038 00:17:50.110 03:05:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:50.110 03:05:45 -- nvmf/common.sh@470 -- # waitforlisten 2003038 00:17:50.110 03:05:45 -- common/autotest_common.sh@819 -- # '[' -z 2003038 ']' 00:17:50.111 03:05:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.111 03:05:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:50.111 03:05:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:50.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.111 03:05:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:50.111 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:17:50.111 [2024-07-14 03:05:45.152332] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:50.111 [2024-07-14 03:05:45.152408] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.111 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.111 [2024-07-14 03:05:45.216209] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:50.111 [2024-07-14 03:05:45.300961] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:50.111 [2024-07-14 03:05:45.301129] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.111 [2024-07-14 03:05:45.301147] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.111 [2024-07-14 03:05:45.301173] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
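The `nvmf_tcp_init` steps traced above (`nvmf/common.sh@243`–`@263`) amount to moving one port of the dual-port NIC into a private network namespace, so initiator and target can talk NVMe/TCP over real hardware on a single host. A dry-run sketch using the interface and address names from this log — `run` only echoes, since the real `ip`/`iptables` invocations require root:

```shell
#!/usr/bin/env bash
# Dry-run of the namespace plumbing nvmf/common.sh performs above.
# Interface/address names are taken from the log; `run` echoes instead of
# executing, because the real commands need root.
NS=cvl_0_0_ns_spdk
run() { echo "+ $*"; }                        # swap for: sudo "$@" on a real box
run ip netns add "$NS"                        # private ns for the target
run ip link set cvl_0_0 netns "$NS"           # move one NIC port inside it
run ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator side (host ns)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
```

The two `ping -c 1` entries above are the harness verifying this plumbing in both directions before `nvmf_tgt` is launched inside the namespace via `ip netns exec`.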
00:17:50.111 [2024-07-14 03:05:45.301202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.045 03:05:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:51.045 03:05:46 -- common/autotest_common.sh@852 -- # return 0 00:17:51.045 03:05:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:51.045 03:05:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:51.045 03:05:46 -- common/autotest_common.sh@10 -- # set +x 00:17:51.045 03:05:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.045 03:05:46 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:51.045 03:05:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:51.045 03:05:46 -- common/autotest_common.sh@10 -- # set +x 00:17:51.045 [2024-07-14 03:05:46.120486] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.045 03:05:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:51.045 03:05:46 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:51.045 03:05:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:51.045 03:05:46 -- common/autotest_common.sh@10 -- # set +x 00:17:51.045 Malloc0 00:17:51.045 03:05:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:51.045 03:05:46 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:51.045 03:05:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:51.045 03:05:46 -- common/autotest_common.sh@10 -- # set +x 00:17:51.045 03:05:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:51.045 03:05:46 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:51.045 03:05:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:51.045 03:05:46 -- common/autotest_common.sh@10 -- # set +x 00:17:51.045 03:05:46 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:51.045 03:05:46 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.045 03:05:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:51.045 03:05:46 -- common/autotest_common.sh@10 -- # set +x 00:17:51.045 [2024-07-14 03:05:46.179364] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.045 03:05:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:51.045 03:05:46 -- target/queue_depth.sh@30 -- # bdevperf_pid=2003195 00:17:51.045 03:05:46 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:51.045 03:05:46 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:51.045 03:05:46 -- target/queue_depth.sh@33 -- # waitforlisten 2003195 /var/tmp/bdevperf.sock 00:17:51.045 03:05:46 -- common/autotest_common.sh@819 -- # '[' -z 2003195 ']' 00:17:51.045 03:05:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:51.045 03:05:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:51.045 03:05:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:51.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:51.045 03:05:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:51.045 03:05:46 -- common/autotest_common.sh@10 -- # set +x 00:17:51.045 [2024-07-14 03:05:46.219164] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:17:51.046 [2024-07-14 03:05:46.219252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003195 ] 00:17:51.046 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.046 [2024-07-14 03:05:46.280005] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.304 [2024-07-14 03:05:46.371028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.239 03:05:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:52.240 03:05:47 -- common/autotest_common.sh@852 -- # return 0 00:17:52.240 03:05:47 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:52.240 03:05:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:52.240 03:05:47 -- common/autotest_common.sh@10 -- # set +x 00:17:52.240 NVMe0n1 00:17:52.240 03:05:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:52.240 03:05:47 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:52.240 Running I/O for 10 seconds... 
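The two EAL core masks in this log line up with the reactor notices: the target was started with `-m 0x2` (bit 1 set, hence "Reactor started on core 1") and bdevperf with `-c 0x1` (bit 0 set, "Reactor started on core 0"). A quick sketch of how such a mask decodes — bit *n* set means core *n* is used:

```shell
# Decode an SPDK/DPDK core mask: bit n set selects core n.
decode_mask() {
  local mask=$1 cores="" i
  for ((i = 0; i < 32; i++)); do
    (( (mask >> i) & 1 )) && cores+="$i "
  done
  echo "${cores% }"
}
decode_mask 0x2    # -> 1  (nvmf_tgt above: "Reactor started on core 1")
decode_mask 0x1    # -> 0  (bdevperf above: "Reactor started on core 0")
```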
00:18:04.457 00:18:04.457 Latency(us) 00:18:04.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.457 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:04.457 Verification LBA range: start 0x0 length 0x4000 00:18:04.457 NVMe0n1 : 10.07 12314.37 48.10 0.00 0.00 82830.20 14369.37 78449.02 00:18:04.457 =================================================================================================================== 00:18:04.457 Total : 12314.37 48.10 0.00 0.00 82830.20 14369.37 78449.02 00:18:04.457 0 00:18:04.457 03:05:57 -- target/queue_depth.sh@39 -- # killprocess 2003195 00:18:04.457 03:05:57 -- common/autotest_common.sh@926 -- # '[' -z 2003195 ']' 00:18:04.457 03:05:57 -- common/autotest_common.sh@930 -- # kill -0 2003195 00:18:04.457 03:05:57 -- common/autotest_common.sh@931 -- # uname 00:18:04.457 03:05:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:04.457 03:05:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2003195 00:18:04.457 03:05:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:04.457 03:05:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:04.457 03:05:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2003195' 00:18:04.457 killing process with pid 2003195 00:18:04.457 03:05:57 -- common/autotest_common.sh@945 -- # kill 2003195 00:18:04.457 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.457 00:18:04.457 Latency(us) 00:18:04.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.457 =================================================================================================================== 00:18:04.457 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:04.457 03:05:57 -- common/autotest_common.sh@950 -- # wait 2003195 00:18:04.457 03:05:57 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:04.457 03:05:57 -- 
target/queue_depth.sh@43 -- # nvmftestfini 00:18:04.457 03:05:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:04.457 03:05:57 -- nvmf/common.sh@116 -- # sync 00:18:04.457 03:05:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:04.457 03:05:57 -- nvmf/common.sh@119 -- # set +e 00:18:04.457 03:05:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:04.457 03:05:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:04.457 rmmod nvme_tcp 00:18:04.457 rmmod nvme_fabrics 00:18:04.457 rmmod nvme_keyring 00:18:04.457 03:05:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:04.457 03:05:57 -- nvmf/common.sh@123 -- # set -e 00:18:04.457 03:05:57 -- nvmf/common.sh@124 -- # return 0 00:18:04.457 03:05:57 -- nvmf/common.sh@477 -- # '[' -n 2003038 ']' 00:18:04.457 03:05:57 -- nvmf/common.sh@478 -- # killprocess 2003038 00:18:04.457 03:05:57 -- common/autotest_common.sh@926 -- # '[' -z 2003038 ']' 00:18:04.457 03:05:57 -- common/autotest_common.sh@930 -- # kill -0 2003038 00:18:04.458 03:05:57 -- common/autotest_common.sh@931 -- # uname 00:18:04.458 03:05:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:04.458 03:05:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2003038 00:18:04.458 03:05:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:04.458 03:05:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:04.458 03:05:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2003038' 00:18:04.458 killing process with pid 2003038 00:18:04.458 03:05:57 -- common/autotest_common.sh@945 -- # kill 2003038 00:18:04.458 03:05:57 -- common/autotest_common.sh@950 -- # wait 2003038 00:18:04.458 03:05:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:04.458 03:05:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:04.458 03:05:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:04.458 03:05:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
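The bdevperf summary table above is internally consistent, which is worth checking when triaging a run: MiB/s is IOPS times the 4 KiB IO size (`-o 4096`), and by Little's law IOPS times average latency should recover roughly the in-flight depth requested with `-q 1024`. A sanity check using the figures copied from the table:

```shell
# Sanity-check the bdevperf result table (figures copied from the log above).
iops=12314.37; io_size=4096; avg_lat_us=82830.20
mib_s=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / 1048576 }')
inflight=$(awk -v i="$iops" -v l="$avg_lat_us" 'BEGIN { printf "%.0f", i * l / 1e6 }')
echo "throughput: $mib_s MiB/s"   # matches the 48.10 MiB/s column
echo "in-flight:  $inflight"      # ~1020, close to the -q 1024 setting
```

A large gap between the recovered in-flight count and the requested queue depth would suggest the workload never filled the queue, so latency numbers at that depth would not be meaningful.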
00:18:04.458 03:05:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:04.458 03:05:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.458 03:05:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.458 03:05:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.025 03:06:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:05.025 00:18:05.025 real 0m17.374s 00:18:05.025 user 0m24.791s 00:18:05.025 sys 0m3.264s 00:18:05.025 03:06:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:05.025 03:06:00 -- common/autotest_common.sh@10 -- # set +x 00:18:05.025 ************************************ 00:18:05.025 END TEST nvmf_queue_depth 00:18:05.025 ************************************ 00:18:05.025 03:06:00 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:05.025 03:06:00 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:05.025 03:06:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:05.025 03:06:00 -- common/autotest_common.sh@10 -- # set +x 00:18:05.025 ************************************ 00:18:05.025 START TEST nvmf_multipath 00:18:05.025 ************************************ 00:18:05.025 03:06:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:05.284 * Looking for test storage... 
00:18:05.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:05.284 03:06:00 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:05.284 03:06:00 -- nvmf/common.sh@7 -- # uname -s 00:18:05.284 03:06:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.284 03:06:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.284 03:06:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.284 03:06:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.284 03:06:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.284 03:06:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.284 03:06:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.284 03:06:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.284 03:06:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.284 03:06:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.284 03:06:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.284 03:06:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.284 03:06:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.284 03:06:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.284 03:06:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:05.284 03:06:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:05.284 03:06:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.284 03:06:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.284 03:06:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.284 03:06:00 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.284 03:06:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.284 03:06:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.284 03:06:00 -- paths/export.sh@5 -- # export PATH 00:18:05.284 03:06:00 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.284 03:06:00 -- nvmf/common.sh@46 -- # : 0 00:18:05.284 03:06:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:05.284 03:06:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:05.284 03:06:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:05.284 03:06:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.284 03:06:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.284 03:06:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:05.284 03:06:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:05.284 03:06:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:05.284 03:06:00 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:05.284 03:06:00 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:05.284 03:06:00 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:05.284 03:06:00 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:05.284 03:06:00 -- target/multipath.sh@43 -- # nvmftestinit 00:18:05.284 03:06:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:05.284 03:06:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.284 03:06:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:05.284 03:06:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:05.284 03:06:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:05.284 03:06:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:18:05.284 03:06:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.284 03:06:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.284 03:06:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:05.284 03:06:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:05.284 03:06:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:05.284 03:06:00 -- common/autotest_common.sh@10 -- # set +x 00:18:07.191 03:06:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:07.191 03:06:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:07.191 03:06:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:07.191 03:06:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:07.191 03:06:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:07.191 03:06:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:07.191 03:06:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:07.191 03:06:02 -- nvmf/common.sh@294 -- # net_devs=() 00:18:07.191 03:06:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:07.191 03:06:02 -- nvmf/common.sh@295 -- # e810=() 00:18:07.191 03:06:02 -- nvmf/common.sh@295 -- # local -ga e810 00:18:07.191 03:06:02 -- nvmf/common.sh@296 -- # x722=() 00:18:07.191 03:06:02 -- nvmf/common.sh@296 -- # local -ga x722 00:18:07.191 03:06:02 -- nvmf/common.sh@297 -- # mlx=() 00:18:07.191 03:06:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:07.191 03:06:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:07.191 03:06:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:07.191 03:06:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:07.191 03:06:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:07.191 03:06:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:07.191 03:06:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:18:07.191 03:06:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:07.191 03:06:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:07.191 03:06:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:07.191 03:06:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:07.191 03:06:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:07.191 03:06:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:07.191 03:06:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:07.191 03:06:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:07.191 03:06:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:07.191 03:06:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:07.191 03:06:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:07.191 03:06:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:07.191 03:06:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:07.191 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:07.191 03:06:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:07.191 03:06:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:07.191 03:06:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.191 03:06:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.191 03:06:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:07.191 03:06:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:07.191 03:06:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:07.191 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:07.191 03:06:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:07.191 03:06:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:07.191 03:06:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.191 03:06:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.191 03:06:02 
-- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:07.191 03:06:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:07.191 03:06:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:07.191 03:06:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:07.191 03:06:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:07.191 03:06:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.191 03:06:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:07.191 03:06:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.191 03:06:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:07.191 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:07.191 03:06:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.191 03:06:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:07.191 03:06:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.191 03:06:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:07.191 03:06:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.191 03:06:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:07.191 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:07.191 03:06:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.191 03:06:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:07.191 03:06:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:07.191 03:06:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:07.191 03:06:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:07.191 03:06:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:07.191 03:06:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.191 03:06:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.191 03:06:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:07.191 03:06:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 
00:18:07.191 03:06:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:07.191 03:06:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:07.191 03:06:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:07.191 03:06:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:07.192 03:06:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.192 03:06:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:07.192 03:06:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:07.192 03:06:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:07.192 03:06:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:07.192 03:06:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:07.192 03:06:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:07.192 03:06:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:07.192 03:06:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:07.192 03:06:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:07.192 03:06:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:07.192 03:06:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:07.192 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.192 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:18:07.192 00:18:07.192 --- 10.0.0.2 ping statistics --- 00:18:07.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.192 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:18:07.192 03:06:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:07.192 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:07.192 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:18:07.192 00:18:07.192 --- 10.0.0.1 ping statistics --- 00:18:07.192 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.192 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:18:07.192 03:06:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.192 03:06:02 -- nvmf/common.sh@410 -- # return 0 00:18:07.192 03:06:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:07.192 03:06:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.192 03:06:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:07.192 03:06:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:07.192 03:06:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.453 03:06:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:07.453 03:06:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:07.453 03:06:02 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:07.453 03:06:02 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:07.453 only one NIC for nvmf test 00:18:07.453 03:06:02 -- target/multipath.sh@47 -- # nvmftestfini 00:18:07.453 03:06:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:07.453 03:06:02 -- nvmf/common.sh@116 -- # sync 00:18:07.453 03:06:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:07.453 03:06:02 -- nvmf/common.sh@119 -- # set +e 00:18:07.453 03:06:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:07.453 03:06:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:07.453 rmmod nvme_tcp 00:18:07.453 rmmod nvme_fabrics 00:18:07.453 rmmod nvme_keyring 00:18:07.453 03:06:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:07.453 03:06:02 -- nvmf/common.sh@123 -- # set -e 00:18:07.453 03:06:02 -- nvmf/common.sh@124 -- # return 0 00:18:07.453 03:06:02 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:07.453 03:06:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:07.453 03:06:02 -- 
nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:07.453 03:06:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:07.453 03:06:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:07.453 03:06:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:07.453 03:06:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.453 03:06:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.453 03:06:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.363 03:06:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:09.363 03:06:04 -- target/multipath.sh@48 -- # exit 0 00:18:09.363 03:06:04 -- target/multipath.sh@1 -- # nvmftestfini 00:18:09.363 03:06:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:09.363 03:06:04 -- nvmf/common.sh@116 -- # sync 00:18:09.363 03:06:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:09.363 03:06:04 -- nvmf/common.sh@119 -- # set +e 00:18:09.363 03:06:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:09.363 03:06:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:09.363 03:06:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:09.363 03:06:04 -- nvmf/common.sh@123 -- # set -e 00:18:09.363 03:06:04 -- nvmf/common.sh@124 -- # return 0 00:18:09.363 03:06:04 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:09.363 03:06:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:09.363 03:06:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:09.363 03:06:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:09.363 03:06:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:09.363 03:06:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:09.363 03:06:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.363 03:06:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.363 03:06:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.363 03:06:04 
-- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:09.363 00:18:09.363 real 0m4.339s 00:18:09.363 user 0m0.793s 00:18:09.363 sys 0m1.512s 00:18:09.363 03:06:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:09.363 03:06:04 -- common/autotest_common.sh@10 -- # set +x 00:18:09.363 ************************************ 00:18:09.363 END TEST nvmf_multipath 00:18:09.363 ************************************ 00:18:09.622 03:06:04 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:09.622 03:06:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:09.622 03:06:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:09.622 03:06:04 -- common/autotest_common.sh@10 -- # set +x 00:18:09.622 ************************************ 00:18:09.622 START TEST nvmf_zcopy 00:18:09.622 ************************************ 00:18:09.622 03:06:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:09.622 * Looking for test storage... 
00:18:09.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.622 03:06:04 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.622 03:06:04 -- nvmf/common.sh@7 -- # uname -s 00:18:09.622 03:06:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.622 03:06:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.622 03:06:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.622 03:06:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.622 03:06:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.622 03:06:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.622 03:06:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.622 03:06:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.622 03:06:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.622 03:06:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.622 03:06:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.622 03:06:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.622 03:06:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.622 03:06:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.622 03:06:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.622 03:06:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.622 03:06:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.622 03:06:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.622 03:06:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.622 03:06:04 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.622 03:06:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.622 03:06:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.622 03:06:04 -- paths/export.sh@5 -- # export PATH 00:18:09.622 03:06:04 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.622 03:06:04 -- nvmf/common.sh@46 -- # : 0 00:18:09.622 03:06:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:09.622 03:06:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:09.622 03:06:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:09.622 03:06:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.622 03:06:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.622 03:06:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:09.622 03:06:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:09.622 03:06:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:09.622 03:06:04 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:09.622 03:06:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:09.622 03:06:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.622 03:06:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:09.622 03:06:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:09.622 03:06:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:09.622 03:06:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.622 03:06:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.622 03:06:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.622 03:06:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:09.622 03:06:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:09.622 03:06:04 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:18:09.622 03:06:04 -- common/autotest_common.sh@10 -- # set +x 00:18:11.523 03:06:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:11.523 03:06:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:11.523 03:06:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:11.523 03:06:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:11.523 03:06:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:11.523 03:06:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:11.523 03:06:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:11.523 03:06:06 -- nvmf/common.sh@294 -- # net_devs=() 00:18:11.523 03:06:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:11.523 03:06:06 -- nvmf/common.sh@295 -- # e810=() 00:18:11.523 03:06:06 -- nvmf/common.sh@295 -- # local -ga e810 00:18:11.523 03:06:06 -- nvmf/common.sh@296 -- # x722=() 00:18:11.523 03:06:06 -- nvmf/common.sh@296 -- # local -ga x722 00:18:11.523 03:06:06 -- nvmf/common.sh@297 -- # mlx=() 00:18:11.523 03:06:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:11.523 03:06:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:11.523 03:06:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:11.523 03:06:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:11.523 03:06:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:11.523 03:06:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:11.523 03:06:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:11.523 03:06:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:11.523 03:06:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:11.523 03:06:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:11.523 03:06:06 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:11.523 03:06:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:11.523 03:06:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:11.523 03:06:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:11.523 03:06:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:11.523 03:06:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:11.523 03:06:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:11.523 03:06:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:11.523 03:06:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:11.523 03:06:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:11.523 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:11.523 03:06:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:11.523 03:06:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:11.523 03:06:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.523 03:06:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.523 03:06:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:11.523 03:06:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:11.523 03:06:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:11.523 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:11.523 03:06:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:11.523 03:06:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:11.523 03:06:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.523 03:06:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.523 03:06:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:11.523 03:06:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:11.523 03:06:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:11.523 03:06:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:11.523 03:06:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:18:11.523 03:06:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.523 03:06:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:11.523 03:06:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.523 03:06:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:11.523 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:11.523 03:06:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.523 03:06:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:11.523 03:06:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.523 03:06:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:11.523 03:06:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.523 03:06:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:11.523 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:11.523 03:06:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.523 03:06:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:11.523 03:06:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:11.523 03:06:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:11.523 03:06:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:11.523 03:06:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:11.523 03:06:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.523 03:06:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.523 03:06:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:11.523 03:06:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:11.523 03:06:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:11.523 03:06:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:11.523 03:06:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:11.523 03:06:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:18:11.523 03:06:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.523 03:06:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:11.523 03:06:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:11.523 03:06:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:11.523 03:06:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:11.523 03:06:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:11.523 03:06:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:11.523 03:06:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:11.523 03:06:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:11.781 03:06:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:11.781 03:06:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:11.781 03:06:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:11.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:18:11.781 00:18:11.781 --- 10.0.0.2 ping statistics --- 00:18:11.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.781 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:18:11.781 03:06:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:11.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:11.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:18:11.781 00:18:11.781 --- 10.0.0.1 ping statistics --- 00:18:11.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.781 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:18:11.781 03:06:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.781 03:06:06 -- nvmf/common.sh@410 -- # return 0 00:18:11.781 03:06:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:11.781 03:06:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.781 03:06:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:11.781 03:06:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:11.781 03:06:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.781 03:06:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:11.781 03:06:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:11.781 03:06:06 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:11.781 03:06:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:11.781 03:06:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:11.781 03:06:06 -- common/autotest_common.sh@10 -- # set +x 00:18:11.781 03:06:06 -- nvmf/common.sh@469 -- # nvmfpid=2008613 00:18:11.781 03:06:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:11.781 03:06:06 -- nvmf/common.sh@470 -- # waitforlisten 2008613 00:18:11.781 03:06:06 -- common/autotest_common.sh@819 -- # '[' -z 2008613 ']' 00:18:11.781 03:06:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.781 03:06:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:11.781 03:06:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:11.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.781 03:06:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:11.781 03:06:06 -- common/autotest_common.sh@10 -- # set +x 00:18:11.781 [2024-07-14 03:06:06.871819] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:11.781 [2024-07-14 03:06:06.871935] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.781 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.781 [2024-07-14 03:06:06.940704] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.781 [2024-07-14 03:06:07.032460] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:11.781 [2024-07-14 03:06:07.032618] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.781 [2024-07-14 03:06:07.032649] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.781 [2024-07-14 03:06:07.032664] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:11.781 [2024-07-14 03:06:07.032704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.715 03:06:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:12.716 03:06:07 -- common/autotest_common.sh@852 -- # return 0 00:18:12.716 03:06:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:12.716 03:06:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:12.716 03:06:07 -- common/autotest_common.sh@10 -- # set +x 00:18:12.716 03:06:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.716 03:06:07 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:12.716 03:06:07 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:12.716 03:06:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.716 03:06:07 -- common/autotest_common.sh@10 -- # set +x 00:18:12.716 [2024-07-14 03:06:07.816120] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.716 03:06:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.716 03:06:07 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:12.716 03:06:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.716 03:06:07 -- common/autotest_common.sh@10 -- # set +x 00:18:12.716 03:06:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.716 03:06:07 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:12.716 03:06:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.716 03:06:07 -- common/autotest_common.sh@10 -- # set +x 00:18:12.716 [2024-07-14 03:06:07.832316] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.716 03:06:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.716 03:06:07 -- target/zcopy.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:12.716 03:06:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.716 03:06:07 -- common/autotest_common.sh@10 -- # set +x 00:18:12.716 03:06:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.716 03:06:07 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:12.716 03:06:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.716 03:06:07 -- common/autotest_common.sh@10 -- # set +x 00:18:12.716 malloc0 00:18:12.716 03:06:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.716 03:06:07 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:12.716 03:06:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.716 03:06:07 -- common/autotest_common.sh@10 -- # set +x 00:18:12.716 03:06:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.716 03:06:07 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:12.716 03:06:07 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:12.716 03:06:07 -- nvmf/common.sh@520 -- # config=() 00:18:12.716 03:06:07 -- nvmf/common.sh@520 -- # local subsystem config 00:18:12.716 03:06:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:12.716 03:06:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:12.716 { 00:18:12.716 "params": { 00:18:12.716 "name": "Nvme$subsystem", 00:18:12.716 "trtype": "$TEST_TRANSPORT", 00:18:12.716 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:12.716 "adrfam": "ipv4", 00:18:12.716 "trsvcid": "$NVMF_PORT", 00:18:12.716 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:12.716 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:12.716 "hdgst": ${hdgst:-false}, 00:18:12.716 "ddgst": ${ddgst:-false} 00:18:12.716 }, 00:18:12.716 "method": "bdev_nvme_attach_controller" 00:18:12.716 } 00:18:12.716 
EOF 00:18:12.716 )") 00:18:12.716 03:06:07 -- nvmf/common.sh@542 -- # cat 00:18:12.716 03:06:07 -- nvmf/common.sh@544 -- # jq . 00:18:12.716 03:06:07 -- nvmf/common.sh@545 -- # IFS=, 00:18:12.716 03:06:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:12.716 "params": { 00:18:12.716 "name": "Nvme1", 00:18:12.716 "trtype": "tcp", 00:18:12.716 "traddr": "10.0.0.2", 00:18:12.716 "adrfam": "ipv4", 00:18:12.716 "trsvcid": "4420", 00:18:12.716 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.716 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:12.716 "hdgst": false, 00:18:12.716 "ddgst": false 00:18:12.716 }, 00:18:12.716 "method": "bdev_nvme_attach_controller" 00:18:12.716 }' 00:18:12.716 [2024-07-14 03:06:07.909562] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:12.716 [2024-07-14 03:06:07.909649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2008978 ] 00:18:12.716 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.976 [2024-07-14 03:06:07.976354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.976 [2024-07-14 03:06:08.069215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.236 Running I/O for 10 seconds... 
00:18:23.241 00:18:23.241 Latency(us) 00:18:23.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.241 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:23.241 Verification LBA range: start 0x0 length 0x1000 00:18:23.241 Nvme1n1 : 10.01 8813.07 68.85 0.00 0.00 14489.86 2342.31 23398.78 00:18:23.241 =================================================================================================================== 00:18:23.241 Total : 8813.07 68.85 0.00 0.00 14489.86 2342.31 23398.78 00:18:23.500 03:06:18 -- target/zcopy.sh@39 -- # perfpid=2010560 00:18:23.500 03:06:18 -- target/zcopy.sh@41 -- # xtrace_disable 00:18:23.500 03:06:18 -- common/autotest_common.sh@10 -- # set +x 00:18:23.500 03:06:18 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:23.500 03:06:18 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:23.500 03:06:18 -- nvmf/common.sh@520 -- # config=() 00:18:23.500 03:06:18 -- nvmf/common.sh@520 -- # local subsystem config 00:18:23.500 03:06:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:23.500 03:06:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:23.500 { 00:18:23.500 "params": { 00:18:23.500 "name": "Nvme$subsystem", 00:18:23.500 "trtype": "$TEST_TRANSPORT", 00:18:23.500 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:23.500 "adrfam": "ipv4", 00:18:23.500 "trsvcid": "$NVMF_PORT", 00:18:23.500 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:23.500 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:23.500 "hdgst": ${hdgst:-false}, 00:18:23.500 "ddgst": ${ddgst:-false} 00:18:23.500 }, 00:18:23.500 "method": "bdev_nvme_attach_controller" 00:18:23.500 } 00:18:23.500 EOF 00:18:23.500 )") 00:18:23.500 03:06:18 -- nvmf/common.sh@542 -- # cat 00:18:23.500 [2024-07-14 03:06:18.507505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:18:23.500 [2024-07-14 03:06:18.507554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.500 03:06:18 -- nvmf/common.sh@544 -- # jq . 00:18:23.500 03:06:18 -- nvmf/common.sh@545 -- # IFS=, 00:18:23.500 03:06:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:23.500 "params": { 00:18:23.500 "name": "Nvme1", 00:18:23.500 "trtype": "tcp", 00:18:23.500 "traddr": "10.0.0.2", 00:18:23.500 "adrfam": "ipv4", 00:18:23.500 "trsvcid": "4420", 00:18:23.500 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:23.500 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:23.500 "hdgst": false, 00:18:23.500 "ddgst": false 00:18:23.500 }, 00:18:23.500 "method": "bdev_nvme_attach_controller" 00:18:23.500 }' 00:18:23.501 [2024-07-14 03:06:18.515459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.515486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.523470] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.523494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.531481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.531502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.539511] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.539533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.543907] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:18:23.501 [2024-07-14 03:06:18.543981] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2010560 ] 00:18:23.501 [2024-07-14 03:06:18.547522] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.547542] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.555544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.555564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.563565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.563585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.571586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.571606] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 EAL: No free 2048 kB hugepages reported on node 1 00:18:23.501 [2024-07-14 03:06:18.579626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.579651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.587648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.587672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.595670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.595695] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.603692] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.603717] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.607472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.501 [2024-07-14 03:06:18.611724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.611751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.619780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.619822] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.627765] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.627800] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.635783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.635809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.643806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.643830] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.651825] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.651850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.659886] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already 
in use 00:18:23.501 [2024-07-14 03:06:18.659935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.667911] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.667942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.675897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.675933] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.683932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.683953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.691948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.691987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.697659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.501 [2024-07-14 03:06:18.699964] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.699985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.707984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.708005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.716024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.716058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.724044] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.724085] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.732077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.732115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.740093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.740132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.501 [2024-07-14 03:06:18.748115] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.501 [2024-07-14 03:06:18.748183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.756154] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.756197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.764123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.764146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.772194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.772252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.780236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.780278] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.788201] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.788239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.796271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.796297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.804280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.804310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.812295] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.812322] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.820305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.820332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.828335] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.828362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.836353] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.836380] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.844379] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.844406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.852397] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 
[2024-07-14 03:06:18.852424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.860420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.860446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.868447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.868477] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 Running I/O for 5 seconds... 00:18:23.761 [2024-07-14 03:06:18.876467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.876494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.888937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.888965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.898529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.898556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.909677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.909706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.919955] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.919983] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.931227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 
03:06:18.931256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.941824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.941852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.952484] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.952511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.964630] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.964658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.973756] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.973785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.985093] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.985127] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:18.995732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:18.995763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:23.761 [2024-07-14 03:06:19.006776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:23.761 [2024-07-14 03:06:19.006803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.020 [2024-07-14 03:06:19.017301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.020 [2024-07-14 03:06:19.017330] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:18:24.020 [2024-07-14 03:06:19.028193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.020 [2024-07-14 03:06:19.028221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.020 [2024-07-14 03:06:19.040337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.020 [2024-07-14 03:06:19.040365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.020 [2024-07-14 03:06:19.050842] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.020 [2024-07-14 03:06:19.050889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.020 [2024-07-14 03:06:19.060249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.020 [2024-07-14 03:06:19.060275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.020 [2024-07-14 03:06:19.071223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.020 [2024-07-14 03:06:19.071252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.020 [2024-07-14 03:06:19.081426] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.020 [2024-07-14 03:06:19.081452] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.020 [2024-07-14 03:06:19.091681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.020 [2024-07-14 03:06:19.091709] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.021 [2024-07-14 03:06:19.102524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.021 [2024-07-14 03:06:19.102551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.021 
[2024-07-14 03:06:19.112776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.021 [2024-07-14 03:06:19.112802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.021 [2024-07-14 03:06:19.122962] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.021 [2024-07-14 03:06:19.122988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.021 [2024-07-14 03:06:19.133310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.021 [2024-07-14 03:06:19.133338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.021 [2024-07-14 03:06:19.143287] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.021 [2024-07-14 03:06:19.143321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.021 [2024-07-14 03:06:19.153314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.021 [2024-07-14 03:06:19.153341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.021 [2024-07-14 03:06:19.163232] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.021 [2024-07-14 03:06:19.163259] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.021 [2024-07-14 03:06:19.173161] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.021 [2024-07-14 03:06:19.173188] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.021 [2024-07-14 03:06:19.183152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.021 [2024-07-14 03:06:19.183180] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.021 [2024-07-14 03:06:19.193081] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.021 [2024-07-14 03:06:19.193108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.021 [2024-07-14 03:06:19.203467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.021 [2024-07-14 03:06:19.203493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.021 [2024-07-14 03:06:19.213194] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.021 [2024-07-14 03:06:19.213221] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.021 [2024-07-14 03:06:19.224172] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.021 [2024-07-14 03:06:19.224200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.021 [2024-07-14 03:06:19.233305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.021 [2024-07-14 03:06:19.233334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.021 [2024-07-14 03:06:19.245396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.021 [2024-07-14 03:06:19.245424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.021 [2024-07-14 03:06:19.254158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.021 [2024-07-14 03:06:19.254195] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.021 [2024-07-14 03:06:19.266958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.021 [2024-07-14 03:06:19.266985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.276640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.276667] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.287038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.287066] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.297610] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.297637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.309545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.309572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.318553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.318580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.329586] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.329613] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.339562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.339595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.349836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.349886] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.359916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 
[2024-07-14 03:06:19.359954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.370554] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.370581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.383213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.383239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.393121] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.393147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.403724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.403751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.413139] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.413166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.423786] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.423812] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.435376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.435404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.444100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.444127] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.455062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.455088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.465548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.465574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.477454] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.477479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.486408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.486434] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.497420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.497446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.509149] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.281 [2024-07-14 03:06:19.509175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.281 [2024-07-14 03:06:19.517881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.282 [2024-07-14 03:06:19.517908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.282 [2024-07-14 03:06:19.529118] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.282 [2024-07-14 03:06:19.529144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:24.542 [2024-07-14 03:06:19.540815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.542 [2024-07-14 03:06:19.540864] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.549747] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.549775] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.560423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.560451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.572375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.572402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.583461] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.583487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.592204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.592231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.602544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.602572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.611380] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.611407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.621650] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.621677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.631452] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.631478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.642024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.642051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.652442] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.652469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.664038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.664064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.673085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.673112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.684132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.684160] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.696128] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.696169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.705324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.705352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.716539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.716566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.726636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.726663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.737338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.737372] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.749506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.749533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.760433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.760460] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.768904] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.768931] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.779819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 [2024-07-14 03:06:19.779845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.543 [2024-07-14 03:06:19.792126] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.543 
[2024-07-14 03:06:19.792168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.802 [2024-07-14 03:06:19.801447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.802 [2024-07-14 03:06:19.801474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.802 [2024-07-14 03:06:19.812266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.802 [2024-07-14 03:06:19.812293] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.802 [2024-07-14 03:06:19.822291] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.802 [2024-07-14 03:06:19.822317] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.802 [2024-07-14 03:06:19.832411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.802 [2024-07-14 03:06:19.832437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.802 [2024-07-14 03:06:19.842592] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.802 [2024-07-14 03:06:19.842618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.802 [2024-07-14 03:06:19.853238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.802 [2024-07-14 03:06:19.853265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.802 [2024-07-14 03:06:19.863476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.802 [2024-07-14 03:06:19.863502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.802 [2024-07-14 03:06:19.873492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.802 [2024-07-14 03:06:19.873518] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.802 [2024-07-14 03:06:19.883473] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.802 [2024-07-14 03:06:19.883499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.802 [2024-07-14 03:06:19.893156] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.802 [2024-07-14 03:06:19.893182] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.802 [2024-07-14 03:06:19.903897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.802 [2024-07-14 03:06:19.903925] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.802 [2024-07-14 03:06:19.914082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.802 [2024-07-14 03:06:19.914109] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.802 [2024-07-14 03:06:19.924390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.802 [2024-07-14 03:06:19.924417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.802 [2024-07-14 03:06:19.934603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.802 [2024-07-14 03:06:19.934630] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.802 [2024-07-14 03:06:19.944800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.802 [2024-07-14 03:06:19.944827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:24.802 [2024-07-14 03:06:19.955507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.802 [2024-07-14 03:06:19.955533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:24.802 [2024-07-14 03:06:19.965419] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:24.802 [2024-07-14 03:06:19.965445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... identical error pair repeated at roughly 10 ms intervals from 03:06:19.965 through 03:06:21.667 — each iteration logs subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace ...]
00:18:26.619 [2024-07-14 03:06:21.667408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.667435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:18:26.619 [2024-07-14 03:06:21.677064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.677092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.619 [2024-07-14 03:06:21.687572] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.687599] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.619 [2024-07-14 03:06:21.697090] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.697118] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.619 [2024-07-14 03:06:21.707774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.707801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.619 [2024-07-14 03:06:21.717337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.717365] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.619 [2024-07-14 03:06:21.727979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.728007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.619 [2024-07-14 03:06:21.737617] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.737644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.619 [2024-07-14 03:06:21.748320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.748347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.619 [2024-07-14 03:06:21.760473] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.760500] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.619 [2024-07-14 03:06:21.769583] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.769610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.619 [2024-07-14 03:06:21.782621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.782664] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.619 [2024-07-14 03:06:21.792751] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.792778] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.619 [2024-07-14 03:06:21.803062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.803090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.619 [2024-07-14 03:06:21.812606] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.812633] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.619 [2024-07-14 03:06:21.823411] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.823453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.619 [2024-07-14 03:06:21.833654] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.833681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.619 [2024-07-14 03:06:21.843824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.843851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.619 [2024-07-14 03:06:21.854338] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.854366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.619 [2024-07-14 03:06:21.866548] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.619 [2024-07-14 03:06:21.866575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:21.875780] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:21.875809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:21.886648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:21.886676] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:21.898712] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:21.898739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:21.907648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:21.907675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:21.920260] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:21.920289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:21.929210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 
[2024-07-14 03:06:21.929239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:21.939986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:21.940014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:21.949974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:21.950002] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:21.960859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:21.960922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:21.971082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:21.971110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:21.981728] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:21.981755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:21.991918] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:21.991947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:22.002055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:22.002083] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:22.014223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:22.014252] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:22.023111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:22.023140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:22.034095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:22.034123] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:22.044102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:22.044140] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:22.055272] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:22.055299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:22.064833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:22.064860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:22.075551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:22.075578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:22.085916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:22.085944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:22.096104] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:22.096132] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:26.880 [2024-07-14 03:06:22.107826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:22.107853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:22.116757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:22.116784] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.880 [2024-07-14 03:06:22.127458] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.880 [2024-07-14 03:06:22.127485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.137946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.137974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.148304] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.148331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.158559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.158586] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.168942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.168970] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.178995] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.179023] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.189141] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.189169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.199244] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.199272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.209979] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.210007] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.220204] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.220232] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.232236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.232263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.241395] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.241423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.252222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.252249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.262585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.262612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.273145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.273173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.283894] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.283922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.293895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.293930] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.304358] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.304385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.314766] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.314794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.325440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.325469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.335696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.335724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.347661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.347688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.356645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 
[2024-07-14 03:06:22.356672] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.367752] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.367779] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.380023] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.380051] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.141 [2024-07-14 03:06:22.388789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.141 [2024-07-14 03:06:22.388816] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.401158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.401202] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.410862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.410896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.419905] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.419934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.430909] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.430937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.443004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.443032] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.452075] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.452104] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.463260] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.463288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.475847] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.475896] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.485270] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.485297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.496239] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.496274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.506416] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.506443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.516805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.516832] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.529202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.529230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:18:27.401 [2024-07-14 03:06:22.537936] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.537966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.550729] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.550757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.560440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.560467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.571193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.571236] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.583280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.583308] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.592100] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.592128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.605420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.605449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.615642] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.615669] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.626282] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.626309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.636329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.636357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.401 [2024-07-14 03:06:22.646428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.401 [2024-07-14 03:06:22.646465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.657223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.657251] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.667530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.667559] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.680333] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.680361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.689737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.689765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.700146] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.700197] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.710368] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.710411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.720693] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.720722] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.731122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.731150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.741349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.741377] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.751669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.751697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.763781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.763809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.773209] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.773237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.784601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.784628] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.796673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 
[2024-07-14 03:06:22.796700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.806085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.806113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.816725] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.816753] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.826954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.826982] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.836844] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.836897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.847285] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.847311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.857307] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.857334] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.868233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.868260] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.660 [2024-07-14 03:06:22.878829] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.660 [2024-07-14 03:06:22.878879] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:27.660 [2024-07-14 03:06:22.889247] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:27.660 [2024-07-14 03:06:22.889274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats at roughly 10 ms intervals (03:06:22.901 through 03:06:23.885) while the I/O job runs; repeated lines elided ...]
00:18:28.697 [2024-07-14 03:06:23.894084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:28.697 [2024-07-14 03:06:23.894112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:28.697
00:18:28.697 Latency(us)
00:18:28.697 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:28.697 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:28.697 Nvme1n1 : 5.01 12379.73 96.72 0.00 0.00 10326.57 4004.98 21651.15
00:18:28.697 ===================================================================================================================
00:18:28.697 Total : 12379.73 96.72 0.00 0.00 10326.57 4004.98 21651.15
00:18:28.697 [2024-07-14 03:06:23.901615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:28.697 [2024-07-14 03:06:23.901643] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair continues at roughly 8 ms intervals (03:06:23.909 through 03:06:24.118) during target shutdown; repeated lines elided ...]
00:18:28.955 [2024-07-14 03:06:24.126258] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:28.955 [2024-07-14 03:06:24.126285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:28.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2010560) - No such process
00:18:28.955 03:06:24 -- target/zcopy.sh@49 -- # wait 2010560
00:18:28.955 03:06:24 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:28.955 03:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable
00:18:28.955 03:06:24 -- common/autotest_common.sh@10 -- # set +x
00:18:28.955 03:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:18:28.955 03:06:24 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:18:28.955 03:06:24 --
common/autotest_common.sh@551 -- # xtrace_disable
00:18:28.955 03:06:24 -- common/autotest_common.sh@10 -- # set +x
00:18:28.955 delay0
03:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:18:28.955 03:06:24 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:18:28.955 03:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable
00:18:28.955 03:06:24 -- common/autotest_common.sh@10 -- # set +x
00:18:28.955 03:06:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:18:28.955 03:06:24 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:18:28.955 EAL: No free 2048 kB hugepages reported on node 1
00:18:35.523 [2024-07-14 03:06:24.203021] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:18:35.523 Initializing NVMe Controllers
00:18:35.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:18:35.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:35.523 Initialization complete. Launching workers.
00:18:35.523 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 88
00:18:35.523 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 375, failed to submit 33
00:18:35.523 success 166, unsuccess 209, failed 0
00:18:35.523 03:06:30 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:18:35.523 03:06:30 -- target/zcopy.sh@60 -- # nvmftestfini
00:18:35.523 03:06:30 -- nvmf/common.sh@476 -- # nvmfcleanup
00:18:35.523 03:06:30 -- nvmf/common.sh@116 -- # sync
00:18:35.523 03:06:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:18:35.523 03:06:30 -- nvmf/common.sh@119 -- # set +e
00:18:35.523 03:06:30 -- nvmf/common.sh@120 -- # for i in {1..20}
00:18:35.523 03:06:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:18:35.523 03:06:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:18:35.523 03:06:30 -- nvmf/common.sh@123 -- # set -e
00:18:35.523 03:06:30 -- nvmf/common.sh@124 -- # return 0
00:18:35.523 03:06:30 -- nvmf/common.sh@477 -- # '[' -n 2008613 ']'
00:18:35.523 03:06:30 -- nvmf/common.sh@478 -- # killprocess 2008613
00:18:35.523 03:06:30 -- common/autotest_common.sh@926 -- # '[' -z 2008613 ']'
00:18:35.523 03:06:30 -- common/autotest_common.sh@930 -- # kill -0 2008613
00:18:35.523 03:06:30 -- common/autotest_common.sh@931 -- # uname
00:18:35.523 03:06:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:18:35.523 03:06:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2008613
00:18:35.523 03:06:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:18:35.523 03:06:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:18:35.523 03:06:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2008613'
killing process with pid 2008613
00:18:35.523 03:06:30 -- common/autotest_common.sh@945 -- # kill 2008613
00:18:35.523 03:06:30 -- common/autotest_common.sh@950 -- # wait 2008613
00:18:35.781 03:06:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:18:35.781 03:06:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:18:35.781 03:06:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:18:35.781 03:06:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:18:35.781 03:06:30 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:18:35.781 03:06:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:18:35.781 03:06:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:18:35.781 03:06:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:37.693 03:06:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:18:37.693
00:18:37.693 real 0m28.216s
00:18:37.693 user 0m41.411s
00:18:37.693 sys 0m8.311s
03:06:32 -- common/autotest_common.sh@1105 -- # xtrace_disable
03:06:32 -- common/autotest_common.sh@10 -- # set +x
00:18:37.693 ************************************
00:18:37.693 END TEST nvmf_zcopy
00:18:37.693 ************************************
00:18:37.693 03:06:32 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:18:37.693 03:06:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:18:37.693 03:06:32 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:18:37.693 03:06:32 -- common/autotest_common.sh@10 -- # set +x
00:18:37.693 ************************************
00:18:37.693 START TEST nvmf_nmic
00:18:37.693 ************************************
00:18:37.693 03:06:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:18:37.693 * Looking for test storage...
00:18:37.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:18:37.693 03:06:32 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:18:37.693 03:06:32 -- nvmf/common.sh@7 -- # uname -s
00:18:37.693 03:06:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:37.693 03:06:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:37.693 03:06:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:37.693 03:06:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:37.693 03:06:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:18:37.693 03:06:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:18:37.693 03:06:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:37.693 03:06:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:18:37.693 03:06:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:37.693 03:06:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:18:37.693 03:06:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:18:37.693 03:06:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:18:37.693 03:06:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:37.693 03:06:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:18:37.693 03:06:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:18:37.693 03:06:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:18:37.693 03:06:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:18:37.693 03:06:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:18:37.693 03:06:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:18:37.693 03:06:32 -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.693 03:06:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.693 03:06:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.693 03:06:32 -- paths/export.sh@5 -- # export PATH 00:18:37.693 03:06:32 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.693 03:06:32 -- nvmf/common.sh@46 -- # : 0 00:18:37.693 03:06:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:37.693 03:06:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:37.693 03:06:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:37.693 03:06:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.693 03:06:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.693 03:06:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:37.693 03:06:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:37.693 03:06:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:37.694 03:06:32 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:37.694 03:06:32 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:37.694 03:06:32 -- target/nmic.sh@14 -- # nvmftestinit 00:18:37.694 03:06:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:37.694 03:06:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.694 03:06:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:37.694 03:06:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:37.694 03:06:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:37.694 03:06:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.694 03:06:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.694 03:06:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.694 03:06:32 -- nvmf/common.sh@402 
-- # [[ phy != virt ]] 00:18:37.694 03:06:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:37.694 03:06:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:37.694 03:06:32 -- common/autotest_common.sh@10 -- # set +x 00:18:39.598 03:06:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:39.598 03:06:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:39.598 03:06:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:39.598 03:06:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:39.598 03:06:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:39.598 03:06:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:39.598 03:06:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:39.598 03:06:34 -- nvmf/common.sh@294 -- # net_devs=() 00:18:39.598 03:06:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:39.598 03:06:34 -- nvmf/common.sh@295 -- # e810=() 00:18:39.598 03:06:34 -- nvmf/common.sh@295 -- # local -ga e810 00:18:39.598 03:06:34 -- nvmf/common.sh@296 -- # x722=() 00:18:39.598 03:06:34 -- nvmf/common.sh@296 -- # local -ga x722 00:18:39.598 03:06:34 -- nvmf/common.sh@297 -- # mlx=() 00:18:39.598 03:06:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:39.598 03:06:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:39.598 03:06:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:39.598 03:06:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:39.598 03:06:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:39.598 03:06:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:39.598 03:06:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:39.598 03:06:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:39.599 03:06:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:39.599 03:06:34 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:39.599 03:06:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:39.599 03:06:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:39.599 03:06:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:39.599 03:06:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:39.599 03:06:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:39.599 03:06:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:39.599 03:06:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:39.599 03:06:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:39.599 03:06:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:39.599 03:06:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:39.599 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:39.599 03:06:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:39.599 03:06:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:39.599 03:06:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.599 03:06:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.599 03:06:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:39.599 03:06:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:39.599 03:06:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:39.599 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:39.599 03:06:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:39.599 03:06:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:39.599 03:06:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.599 03:06:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.599 03:06:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:39.599 03:06:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:39.599 03:06:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:39.599 03:06:34 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:18:39.599 03:06:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:39.599 03:06:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.599 03:06:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:39.599 03:06:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.599 03:06:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:39.599 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:39.599 03:06:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.599 03:06:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:39.599 03:06:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.599 03:06:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:39.599 03:06:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.599 03:06:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:39.599 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:39.599 03:06:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.599 03:06:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:39.599 03:06:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:39.599 03:06:34 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:39.599 03:06:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:39.599 03:06:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:39.599 03:06:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.599 03:06:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:39.599 03:06:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:39.599 03:06:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:39.599 03:06:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:39.599 03:06:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:39.599 03:06:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:18:39.599 03:06:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:39.599 03:06:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.599 03:06:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:39.599 03:06:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:39.599 03:06:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:39.599 03:06:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:39.599 03:06:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:39.599 03:06:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:39.599 03:06:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:39.599 03:06:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:39.858 03:06:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:39.858 03:06:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:39.858 03:06:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:39.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:18:39.858 00:18:39.858 --- 10.0.0.2 ping statistics --- 00:18:39.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.858 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:18:39.858 03:06:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:39.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:39.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:18:39.858 00:18:39.858 --- 10.0.0.1 ping statistics --- 00:18:39.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.858 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:18:39.858 03:06:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.858 03:06:34 -- nvmf/common.sh@410 -- # return 0 00:18:39.858 03:06:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:39.858 03:06:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.858 03:06:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:39.858 03:06:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:39.858 03:06:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.858 03:06:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:39.858 03:06:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:39.858 03:06:34 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:39.858 03:06:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:39.858 03:06:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:39.858 03:06:34 -- common/autotest_common.sh@10 -- # set +x 00:18:39.858 03:06:34 -- nvmf/common.sh@469 -- # nvmfpid=2013869 00:18:39.858 03:06:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:39.858 03:06:34 -- nvmf/common.sh@470 -- # waitforlisten 2013869 00:18:39.858 03:06:34 -- common/autotest_common.sh@819 -- # '[' -z 2013869 ']' 00:18:39.858 03:06:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.858 03:06:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:39.858 03:06:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:39.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.858 03:06:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:39.858 03:06:34 -- common/autotest_common.sh@10 -- # set +x 00:18:39.858 [2024-07-14 03:06:34.946612] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:39.858 [2024-07-14 03:06:34.946690] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.858 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.858 [2024-07-14 03:06:35.015490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:39.858 [2024-07-14 03:06:35.106128] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:39.858 [2024-07-14 03:06:35.106304] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.858 [2024-07-14 03:06:35.106323] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:39.858 [2024-07-14 03:06:35.106338] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:39.858 [2024-07-14 03:06:35.106397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.858 [2024-07-14 03:06:35.106448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.858 [2024-07-14 03:06:35.106511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:39.858 [2024-07-14 03:06:35.106513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.796 03:06:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:40.796 03:06:35 -- common/autotest_common.sh@852 -- # return 0 00:18:40.796 03:06:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:40.796 03:06:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:40.796 03:06:35 -- common/autotest_common.sh@10 -- # set +x 00:18:40.796 03:06:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.796 03:06:35 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:40.796 03:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.796 03:06:35 -- common/autotest_common.sh@10 -- # set +x 00:18:40.796 [2024-07-14 03:06:35.909400] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.796 03:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:40.796 03:06:35 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:40.796 03:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.796 03:06:35 -- common/autotest_common.sh@10 -- # set +x 00:18:40.796 Malloc0 00:18:40.796 03:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:40.796 03:06:35 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:40.796 03:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.796 03:06:35 -- common/autotest_common.sh@10 -- # set +x 00:18:40.796 03:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:18:40.796 03:06:35 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:40.796 03:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.796 03:06:35 -- common/autotest_common.sh@10 -- # set +x 00:18:40.796 03:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:40.796 03:06:35 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:40.796 03:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.796 03:06:35 -- common/autotest_common.sh@10 -- # set +x 00:18:40.796 [2024-07-14 03:06:35.962753] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.796 03:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:40.796 03:06:35 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:40.796 test case1: single bdev can't be used in multiple subsystems 00:18:40.796 03:06:35 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:40.796 03:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.796 03:06:35 -- common/autotest_common.sh@10 -- # set +x 00:18:40.796 03:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:40.796 03:06:35 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:40.796 03:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.796 03:06:35 -- common/autotest_common.sh@10 -- # set +x 00:18:40.796 03:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:40.796 03:06:35 -- target/nmic.sh@28 -- # nmic_status=0 00:18:40.796 03:06:35 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:40.796 03:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.796 03:06:35 -- common/autotest_common.sh@10 
-- # set +x 00:18:40.796 [2024-07-14 03:06:35.986629] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:40.796 [2024-07-14 03:06:35.986658] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:40.796 [2024-07-14 03:06:35.986688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:40.796 request: 00:18:40.796 { 00:18:40.796 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:40.796 "namespace": { 00:18:40.796 "bdev_name": "Malloc0" 00:18:40.796 }, 00:18:40.796 "method": "nvmf_subsystem_add_ns", 00:18:40.796 "req_id": 1 00:18:40.796 } 00:18:40.796 Got JSON-RPC error response 00:18:40.796 response: 00:18:40.796 { 00:18:40.796 "code": -32602, 00:18:40.796 "message": "Invalid parameters" 00:18:40.796 } 00:18:40.796 03:06:35 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:18:40.796 03:06:35 -- target/nmic.sh@29 -- # nmic_status=1 00:18:40.796 03:06:35 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:40.796 03:06:35 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:40.796 Adding namespace failed - expected result. 
00:18:40.796 03:06:35 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:40.796 test case2: host connect to nvmf target in multiple paths 00:18:40.796 03:06:35 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:40.796 03:06:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:40.796 03:06:35 -- common/autotest_common.sh@10 -- # set +x 00:18:40.796 [2024-07-14 03:06:35.994731] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:40.796 03:06:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:40.796 03:06:35 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:41.734 03:06:36 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:42.302 03:06:37 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:42.302 03:06:37 -- common/autotest_common.sh@1177 -- # local i=0 00:18:42.302 03:06:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:42.302 03:06:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:42.302 03:06:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:44.207 03:06:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:44.207 03:06:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:44.207 03:06:39 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:44.207 03:06:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:44.207 03:06:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 
00:18:44.207 03:06:39 -- common/autotest_common.sh@1187 -- # return 0 00:18:44.207 03:06:39 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:44.207 [global] 00:18:44.207 thread=1 00:18:44.207 invalidate=1 00:18:44.207 rw=write 00:18:44.207 time_based=1 00:18:44.207 runtime=1 00:18:44.207 ioengine=libaio 00:18:44.207 direct=1 00:18:44.207 bs=4096 00:18:44.207 iodepth=1 00:18:44.207 norandommap=0 00:18:44.207 numjobs=1 00:18:44.207 00:18:44.207 verify_dump=1 00:18:44.207 verify_backlog=512 00:18:44.207 verify_state_save=0 00:18:44.207 do_verify=1 00:18:44.207 verify=crc32c-intel 00:18:44.207 [job0] 00:18:44.207 filename=/dev/nvme0n1 00:18:44.207 Could not set queue depth (nvme0n1) 00:18:44.466 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:44.466 fio-3.35 00:18:44.466 Starting 1 thread 00:18:45.845 00:18:45.845 job0: (groupid=0, jobs=1): err= 0: pid=2014532: Sun Jul 14 03:06:40 2024 00:18:45.845 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:45.845 slat (nsec): min=5481, max=48709, avg=10448.69, stdev=5411.04 00:18:45.845 clat (usec): min=301, max=4049, avg=357.00, stdev=130.84 00:18:45.845 lat (usec): min=307, max=4066, avg=367.45, stdev=132.21 00:18:45.845 clat percentiles (usec): 00:18:45.845 | 1.00th=[ 310], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 326], 00:18:45.845 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 343], 60.00th=[ 351], 00:18:45.845 | 70.00th=[ 359], 80.00th=[ 371], 90.00th=[ 396], 95.00th=[ 412], 00:18:45.845 | 99.00th=[ 506], 99.50th=[ 611], 99.90th=[ 3032], 99.95th=[ 4047], 00:18:45.845 | 99.99th=[ 4047] 00:18:45.845 write: IOPS=1799, BW=7197KiB/s (7370kB/s)(7204KiB/1001msec); 0 zone resets 00:18:45.845 slat (nsec): min=6997, max=55705, avg=12801.29, stdev=6716.34 00:18:45.845 clat (usec): min=188, max=380, avg=222.79, stdev=24.13 00:18:45.845 lat (usec): min=196, max=412, 
avg=235.59, stdev=29.20 00:18:45.845 clat percentiles (usec): 00:18:45.845 | 1.00th=[ 194], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 206], 00:18:45.845 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 221], 00:18:45.845 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 253], 95.00th=[ 273], 00:18:45.845 | 99.00th=[ 306], 99.50th=[ 334], 99.90th=[ 371], 99.95th=[ 383], 00:18:45.845 | 99.99th=[ 383] 00:18:45.845 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:18:45.845 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:45.845 lat (usec) : 250=47.89%, 500=51.57%, 750=0.45% 00:18:45.845 lat (msec) : 4=0.06%, 10=0.03% 00:18:45.845 cpu : usr=3.90%, sys=4.40%, ctx=3337, majf=0, minf=2 00:18:45.845 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:45.845 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.845 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:45.845 issued rwts: total=1536,1801,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:45.845 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:45.845 00:18:45.845 Run status group 0 (all jobs): 00:18:45.845 READ: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:18:45.845 WRITE: bw=7197KiB/s (7370kB/s), 7197KiB/s-7197KiB/s (7370kB/s-7370kB/s), io=7204KiB (7377kB), run=1001-1001msec 00:18:45.845 00:18:45.845 Disk stats (read/write): 00:18:45.845 nvme0n1: ios=1523/1536, merge=0/0, ticks=543/314, in_queue=857, util=92.08% 00:18:45.845 03:06:40 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:45.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:45.845 03:06:40 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:45.845 03:06:40 -- common/autotest_common.sh@1198 -- # local i=0 00:18:45.845 03:06:40 -- common/autotest_common.sh@1199 -- # 
lsblk -o NAME,SERIAL 00:18:45.845 03:06:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:45.845 03:06:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:45.845 03:06:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:45.845 03:06:40 -- common/autotest_common.sh@1210 -- # return 0 00:18:45.845 03:06:40 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:45.845 03:06:40 -- target/nmic.sh@53 -- # nvmftestfini 00:18:45.845 03:06:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:45.845 03:06:40 -- nvmf/common.sh@116 -- # sync 00:18:45.845 03:06:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:45.845 03:06:40 -- nvmf/common.sh@119 -- # set +e 00:18:45.845 03:06:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:45.845 03:06:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:45.845 rmmod nvme_tcp 00:18:45.845 rmmod nvme_fabrics 00:18:45.845 rmmod nvme_keyring 00:18:45.845 03:06:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:45.845 03:06:40 -- nvmf/common.sh@123 -- # set -e 00:18:45.845 03:06:40 -- nvmf/common.sh@124 -- # return 0 00:18:45.845 03:06:40 -- nvmf/common.sh@477 -- # '[' -n 2013869 ']' 00:18:45.845 03:06:40 -- nvmf/common.sh@478 -- # killprocess 2013869 00:18:45.845 03:06:40 -- common/autotest_common.sh@926 -- # '[' -z 2013869 ']' 00:18:45.845 03:06:40 -- common/autotest_common.sh@930 -- # kill -0 2013869 00:18:45.845 03:06:40 -- common/autotest_common.sh@931 -- # uname 00:18:45.845 03:06:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:45.846 03:06:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2013869 00:18:45.846 03:06:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:45.846 03:06:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:45.846 03:06:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2013869' 00:18:45.846 killing process with pid 
2013869 00:18:45.846 03:06:40 -- common/autotest_common.sh@945 -- # kill 2013869 00:18:45.846 03:06:40 -- common/autotest_common.sh@950 -- # wait 2013869 00:18:46.106 03:06:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:46.106 03:06:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:46.106 03:06:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:46.106 03:06:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:46.106 03:06:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:46.106 03:06:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.106 03:06:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.106 03:06:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.014 03:06:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:48.014 00:18:48.014 real 0m10.377s 00:18:48.014 user 0m25.229s 00:18:48.014 sys 0m2.323s 00:18:48.014 03:06:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:48.014 03:06:43 -- common/autotest_common.sh@10 -- # set +x 00:18:48.014 ************************************ 00:18:48.014 END TEST nvmf_nmic 00:18:48.014 ************************************ 00:18:48.014 03:06:43 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:48.014 03:06:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:48.014 03:06:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:48.014 03:06:43 -- common/autotest_common.sh@10 -- # set +x 00:18:48.271 ************************************ 00:18:48.271 START TEST nvmf_fio_target 00:18:48.271 ************************************ 00:18:48.271 03:06:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:48.271 * Looking for test storage... 
00:18:48.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:48.271 03:06:43 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:48.271 03:06:43 -- nvmf/common.sh@7 -- # uname -s 00:18:48.271 03:06:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.271 03:06:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.271 03:06:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.271 03:06:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.271 03:06:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.271 03:06:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.271 03:06:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.271 03:06:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.271 03:06:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.271 03:06:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.271 03:06:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.271 03:06:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:48.271 03:06:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.271 03:06:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.271 03:06:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:48.271 03:06:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:48.272 03:06:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.272 03:06:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.272 03:06:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.272 03:06:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.272 03:06:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.272 03:06:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.272 03:06:43 -- paths/export.sh@5 -- # export PATH 00:18:48.272 03:06:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.272 03:06:43 -- nvmf/common.sh@46 -- # : 0 00:18:48.272 03:06:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:48.272 03:06:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:48.272 03:06:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:48.272 03:06:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.272 03:06:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.272 03:06:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:48.272 03:06:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:48.272 03:06:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:48.272 03:06:43 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:48.272 03:06:43 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:48.272 03:06:43 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:48.272 03:06:43 -- target/fio.sh@16 -- # nvmftestinit 00:18:48.272 03:06:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:48.272 03:06:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.272 03:06:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:48.272 03:06:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:48.272 03:06:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:48.272 03:06:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.272 03:06:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:18:48.272 03:06:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.272 03:06:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:48.272 03:06:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:48.272 03:06:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:48.272 03:06:43 -- common/autotest_common.sh@10 -- # set +x 00:18:50.175 03:06:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:50.175 03:06:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:50.175 03:06:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:50.175 03:06:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:50.175 03:06:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:50.175 03:06:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:50.175 03:06:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:50.175 03:06:45 -- nvmf/common.sh@294 -- # net_devs=() 00:18:50.175 03:06:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:50.175 03:06:45 -- nvmf/common.sh@295 -- # e810=() 00:18:50.175 03:06:45 -- nvmf/common.sh@295 -- # local -ga e810 00:18:50.175 03:06:45 -- nvmf/common.sh@296 -- # x722=() 00:18:50.175 03:06:45 -- nvmf/common.sh@296 -- # local -ga x722 00:18:50.175 03:06:45 -- nvmf/common.sh@297 -- # mlx=() 00:18:50.175 03:06:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:50.175 03:06:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.175 03:06:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.175 03:06:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:50.175 03:06:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.175 03:06:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.175 03:06:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:50.175 03:06:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:50.175 03:06:45 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.175 03:06:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.175 03:06:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.175 03:06:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.175 03:06:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:50.175 03:06:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:50.175 03:06:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:50.175 03:06:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:50.175 03:06:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:50.175 03:06:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:50.175 03:06:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:50.175 03:06:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:50.175 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:50.175 03:06:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:50.175 03:06:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:50.175 03:06:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.175 03:06:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.175 03:06:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:50.175 03:06:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:50.175 03:06:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:50.175 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:50.175 03:06:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:50.175 03:06:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:50.175 03:06:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.175 03:06:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.175 03:06:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:50.175 03:06:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:50.175 
03:06:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:50.175 03:06:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:50.175 03:06:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:50.175 03:06:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.175 03:06:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:50.175 03:06:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.175 03:06:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:50.175 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:50.175 03:06:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.175 03:06:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:50.175 03:06:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.175 03:06:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:50.175 03:06:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.175 03:06:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:50.175 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:50.175 03:06:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.175 03:06:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:50.175 03:06:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:50.175 03:06:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:50.175 03:06:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:50.175 03:06:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:50.175 03:06:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:50.175 03:06:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.175 03:06:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:50.175 03:06:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:50.175 03:06:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:50.175 03:06:45 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:50.175 03:06:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:50.175 03:06:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:50.175 03:06:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.175 03:06:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:50.175 03:06:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:50.175 03:06:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:50.175 03:06:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:50.434 03:06:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:50.434 03:06:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:50.434 03:06:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:50.434 03:06:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:50.434 03:06:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:50.434 03:06:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:50.434 03:06:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:50.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:50.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:18:50.434 00:18:50.434 --- 10.0.0.2 ping statistics --- 00:18:50.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.434 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:18:50.434 03:06:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:50.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:50.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:18:50.434 00:18:50.434 --- 10.0.0.1 ping statistics --- 00:18:50.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.434 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:18:50.434 03:06:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.434 03:06:45 -- nvmf/common.sh@410 -- # return 0 00:18:50.434 03:06:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:50.434 03:06:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.434 03:06:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:50.434 03:06:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:50.435 03:06:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.435 03:06:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:50.435 03:06:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:50.435 03:06:45 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:50.435 03:06:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:50.435 03:06:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:50.435 03:06:45 -- common/autotest_common.sh@10 -- # set +x 00:18:50.435 03:06:45 -- nvmf/common.sh@469 -- # nvmfpid=2016724 00:18:50.435 03:06:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:50.435 03:06:45 -- nvmf/common.sh@470 -- # waitforlisten 2016724 00:18:50.435 03:06:45 -- common/autotest_common.sh@819 -- # '[' -z 2016724 ']' 00:18:50.435 03:06:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.435 03:06:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:50.435 03:06:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:50.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.435 03:06:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:50.435 03:06:45 -- common/autotest_common.sh@10 -- # set +x 00:18:50.435 [2024-07-14 03:06:45.612828] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:50.435 [2024-07-14 03:06:45.612933] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.435 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.435 [2024-07-14 03:06:45.683899] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:50.694 [2024-07-14 03:06:45.776627] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:50.695 [2024-07-14 03:06:45.776801] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.695 [2024-07-14 03:06:45.776822] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.695 [2024-07-14 03:06:45.776837] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:50.695 [2024-07-14 03:06:45.776912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.695 [2024-07-14 03:06:45.776966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.695 [2024-07-14 03:06:45.777019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:50.695 [2024-07-14 03:06:45.777022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.628 03:06:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:51.628 03:06:46 -- common/autotest_common.sh@852 -- # return 0 00:18:51.628 03:06:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:51.628 03:06:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:51.628 03:06:46 -- common/autotest_common.sh@10 -- # set +x 00:18:51.628 03:06:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.628 03:06:46 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:51.628 [2024-07-14 03:06:46.782096] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.628 03:06:46 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:51.885 03:06:47 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:51.885 03:06:47 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:52.143 03:06:47 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:52.143 03:06:47 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:52.402 03:06:47 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:52.402 03:06:47 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:52.660 03:06:47 -- target/fio.sh@25 -- # 
raid_malloc_bdevs+=Malloc3 00:18:52.660 03:06:47 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:52.919 03:06:48 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:53.179 03:06:48 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:53.179 03:06:48 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:53.499 03:06:48 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:53.499 03:06:48 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:53.758 03:06:48 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:53.758 03:06:48 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:54.017 03:06:49 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:54.276 03:06:49 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:54.276 03:06:49 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:54.276 03:06:49 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:54.276 03:06:49 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:54.534 03:06:49 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:54.792 [2024-07-14 03:06:49.969250] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:54.792 03:06:49 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:55.051 03:06:50 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:55.311 03:06:50 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:55.878 03:06:51 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:55.878 03:06:51 -- common/autotest_common.sh@1177 -- # local i=0 00:18:55.878 03:06:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:55.878 03:06:51 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:18:55.878 03:06:51 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:18:55.878 03:06:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:58.412 03:06:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:58.412 03:06:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:58.412 03:06:53 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:58.412 03:06:53 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:18:58.412 03:06:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:58.412 03:06:53 -- common/autotest_common.sh@1187 -- # return 0 00:18:58.412 03:06:53 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:58.412 [global] 00:18:58.412 thread=1 00:18:58.412 invalidate=1 00:18:58.412 rw=write 00:18:58.412 time_based=1 00:18:58.412 runtime=1 00:18:58.412 ioengine=libaio 00:18:58.412 direct=1 00:18:58.412 bs=4096 00:18:58.412 
iodepth=1 00:18:58.412 norandommap=0 00:18:58.412 numjobs=1 00:18:58.412 00:18:58.412 verify_dump=1 00:18:58.412 verify_backlog=512 00:18:58.412 verify_state_save=0 00:18:58.412 do_verify=1 00:18:58.412 verify=crc32c-intel 00:18:58.412 [job0] 00:18:58.412 filename=/dev/nvme0n1 00:18:58.412 [job1] 00:18:58.412 filename=/dev/nvme0n2 00:18:58.413 [job2] 00:18:58.413 filename=/dev/nvme0n3 00:18:58.413 [job3] 00:18:58.413 filename=/dev/nvme0n4 00:18:58.413 Could not set queue depth (nvme0n1) 00:18:58.413 Could not set queue depth (nvme0n2) 00:18:58.413 Could not set queue depth (nvme0n3) 00:18:58.413 Could not set queue depth (nvme0n4) 00:18:58.413 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.413 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.413 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.413 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.413 fio-3.35 00:18:58.413 Starting 4 threads 00:18:59.350 00:18:59.351 job0: (groupid=0, jobs=1): err= 0: pid=2017733: Sun Jul 14 03:06:54 2024 00:18:59.351 read: IOPS=42, BW=172KiB/s (176kB/s)(172KiB/1001msec) 00:18:59.351 slat (nsec): min=8725, max=29245, avg=13014.58, stdev=4577.53 00:18:59.351 clat (usec): min=391, max=44008, avg=19370.03, stdev=20549.18 00:18:59.351 lat (usec): min=400, max=44026, avg=19383.05, stdev=20551.80 00:18:59.351 clat percentiles (usec): 00:18:59.351 | 1.00th=[ 392], 5.00th=[ 400], 10.00th=[ 400], 20.00th=[ 408], 00:18:59.351 | 30.00th=[ 412], 40.00th=[ 420], 50.00th=[ 611], 60.00th=[41157], 00:18:59.351 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:59.351 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:18:59.351 | 99.99th=[43779] 00:18:59.351 write: IOPS=511, BW=2046KiB/s 
(2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:59.351 slat (usec): min=7, max=18851, avg=63.94, stdev=854.63 00:18:59.351 clat (usec): min=207, max=1719, avg=257.34, stdev=71.18 00:18:59.351 lat (usec): min=219, max=19207, avg=321.29, stdev=862.08 00:18:59.351 clat percentiles (usec): 00:18:59.351 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 239], 00:18:59.351 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:18:59.351 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 00:18:59.351 | 99.00th=[ 396], 99.50th=[ 502], 99.90th=[ 1713], 99.95th=[ 1713], 00:18:59.351 | 99.99th=[ 1713] 00:18:59.351 bw ( KiB/s): min= 4096, max= 4096, per=26.80%, avg=4096.00, stdev= 0.00, samples=1 00:18:59.351 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:59.351 lat (usec) : 250=45.95%, 500=49.37%, 750=0.90% 00:18:59.351 lat (msec) : 2=0.18%, 50=3.60% 00:18:59.351 cpu : usr=0.60%, sys=1.30%, ctx=558, majf=0, minf=1 00:18:59.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:59.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.351 issued rwts: total=43,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:59.351 job1: (groupid=0, jobs=1): err= 0: pid=2017745: Sun Jul 14 03:06:54 2024 00:18:59.351 read: IOPS=1186, BW=4747KiB/s (4861kB/s)(4752KiB/1001msec) 00:18:59.351 slat (nsec): min=5874, max=61416, avg=15832.38, stdev=7792.73 00:18:59.351 clat (usec): min=300, max=718, avg=494.57, stdev=60.12 00:18:59.351 lat (usec): min=316, max=750, avg=510.40, stdev=61.57 00:18:59.351 clat percentiles (usec): 00:18:59.351 | 1.00th=[ 318], 5.00th=[ 396], 10.00th=[ 433], 20.00th=[ 457], 00:18:59.351 | 30.00th=[ 465], 40.00th=[ 482], 50.00th=[ 494], 60.00th=[ 510], 00:18:59.351 | 70.00th=[ 523], 80.00th=[ 537], 
90.00th=[ 562], 95.00th=[ 586], 00:18:59.351 | 99.00th=[ 660], 99.50th=[ 676], 99.90th=[ 709], 99.95th=[ 717], 00:18:59.351 | 99.99th=[ 717] 00:18:59.351 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:59.351 slat (nsec): min=5288, max=54951, avg=11130.18, stdev=6944.32 00:18:59.351 clat (usec): min=185, max=455, avg=238.49, stdev=41.24 00:18:59.351 lat (usec): min=191, max=484, avg=249.62, stdev=44.35 00:18:59.351 clat percentiles (usec): 00:18:59.351 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:18:59.351 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 239], 00:18:59.351 | 70.00th=[ 249], 80.00th=[ 258], 90.00th=[ 277], 95.00th=[ 318], 00:18:59.351 | 99.00th=[ 400], 99.50th=[ 408], 99.90th=[ 449], 99.95th=[ 457], 00:18:59.351 | 99.99th=[ 457] 00:18:59.351 bw ( KiB/s): min= 6472, max= 6472, per=42.34%, avg=6472.00, stdev= 0.00, samples=1 00:18:59.351 iops : min= 1618, max= 1618, avg=1618.00, stdev= 0.00, samples=1 00:18:59.351 lat (usec) : 250=40.79%, 500=38.84%, 750=20.37% 00:18:59.351 cpu : usr=1.90%, sys=3.90%, ctx=2724, majf=0, minf=2 00:18:59.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:59.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.351 issued rwts: total=1188,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:59.351 job2: (groupid=0, jobs=1): err= 0: pid=2017795: Sun Jul 14 03:06:54 2024 00:18:59.351 read: IOPS=84, BW=340KiB/s (348kB/s)(340KiB/1001msec) 00:18:59.351 slat (nsec): min=6278, max=23920, avg=11911.11, stdev=3446.73 00:18:59.351 clat (usec): min=353, max=42021, avg=9500.80, stdev=16636.53 00:18:59.351 lat (usec): min=371, max=42034, avg=9512.72, stdev=16637.95 00:18:59.351 clat percentiles (usec): 00:18:59.351 | 1.00th=[ 355], 5.00th=[ 453], 10.00th=[ 
490], 20.00th=[ 553], 00:18:59.351 | 30.00th=[ 619], 40.00th=[ 701], 50.00th=[ 783], 60.00th=[ 799], 00:18:59.351 | 70.00th=[ 807], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:18:59.351 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:59.351 | 99.99th=[42206] 00:18:59.351 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:59.351 slat (nsec): min=7898, max=76982, avg=23016.31, stdev=12161.50 00:18:59.351 clat (usec): min=230, max=602, avg=345.66, stdev=73.67 00:18:59.351 lat (usec): min=239, max=616, avg=368.68, stdev=76.26 00:18:59.351 clat percentiles (usec): 00:18:59.351 | 1.00th=[ 241], 5.00th=[ 258], 10.00th=[ 265], 20.00th=[ 277], 00:18:59.351 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 330], 60.00th=[ 359], 00:18:59.351 | 70.00th=[ 392], 80.00th=[ 416], 90.00th=[ 445], 95.00th=[ 478], 00:18:59.351 | 99.00th=[ 529], 99.50th=[ 562], 99.90th=[ 603], 99.95th=[ 603], 00:18:59.351 | 99.99th=[ 603] 00:18:59.351 bw ( KiB/s): min= 4096, max= 4096, per=26.80%, avg=4096.00, stdev= 0.00, samples=1 00:18:59.351 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:59.351 lat (usec) : 250=1.84%, 500=83.25%, 750=6.70%, 1000=4.69% 00:18:59.351 lat (msec) : 2=0.17%, 4=0.17%, 50=3.18% 00:18:59.351 cpu : usr=0.60%, sys=1.80%, ctx=597, majf=0, minf=1 00:18:59.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:59.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.351 issued rwts: total=85,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:59.351 job3: (groupid=0, jobs=1): err= 0: pid=2017806: Sun Jul 14 03:06:54 2024 00:18:59.351 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:18:59.351 slat (nsec): min=4751, max=70428, avg=15800.44, stdev=7627.50 00:18:59.351 clat 
(usec): min=392, max=41657, avg=556.81, stdev=1287.20 00:18:59.351 lat (usec): min=405, max=41670, avg=572.61, stdev=1287.15 00:18:59.351 clat percentiles (usec): 00:18:59.351 | 1.00th=[ 412], 5.00th=[ 433], 10.00th=[ 441], 20.00th=[ 453], 00:18:59.351 | 30.00th=[ 465], 40.00th=[ 490], 50.00th=[ 515], 60.00th=[ 537], 00:18:59.351 | 70.00th=[ 553], 80.00th=[ 578], 90.00th=[ 603], 95.00th=[ 619], 00:18:59.351 | 99.00th=[ 676], 99.50th=[ 685], 99.90th=[ 717], 99.95th=[41681], 00:18:59.351 | 99.99th=[41681] 00:18:59.351 write: IOPS=1263, BW=5055KiB/s (5176kB/s)(5060KiB/1001msec); 0 zone resets 00:18:59.351 slat (nsec): min=5817, max=71252, avg=15627.17, stdev=10350.82 00:18:59.351 clat (usec): min=197, max=1298, avg=303.83, stdev=81.60 00:18:59.351 lat (usec): min=209, max=1316, avg=319.46, stdev=85.68 00:18:59.351 clat percentiles (usec): 00:18:59.351 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 233], 20.00th=[ 247], 00:18:59.351 | 30.00th=[ 258], 40.00th=[ 269], 50.00th=[ 285], 60.00th=[ 302], 00:18:59.351 | 70.00th=[ 322], 80.00th=[ 363], 90.00th=[ 400], 95.00th=[ 437], 00:18:59.351 | 99.00th=[ 529], 99.50th=[ 603], 99.90th=[ 1205], 99.95th=[ 1303], 00:18:59.351 | 99.99th=[ 1303] 00:18:59.351 bw ( KiB/s): min= 4096, max= 4096, per=26.80%, avg=4096.00, stdev= 0.00, samples=1 00:18:59.351 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:59.351 lat (usec) : 250=12.80%, 500=61.25%, 750=25.78% 00:18:59.351 lat (msec) : 2=0.13%, 50=0.04% 00:18:59.351 cpu : usr=1.60%, sys=4.00%, ctx=2289, majf=0, minf=1 00:18:59.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:59.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.351 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.351 issued rwts: total=1024,1265,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.351 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:59.351 00:18:59.351 Run status group 0 (all 
jobs): 00:18:59.351 READ: bw=9351KiB/s (9575kB/s), 172KiB/s-4747KiB/s (176kB/s-4861kB/s), io=9360KiB (9585kB), run=1001-1001msec 00:18:59.351 WRITE: bw=14.9MiB/s (15.7MB/s), 2046KiB/s-6138KiB/s (2095kB/s-6285kB/s), io=14.9MiB (15.7MB), run=1001-1001msec 00:18:59.351 00:18:59.351 Disk stats (read/write): 00:18:59.351 nvme0n1: ios=59/512, merge=0/0, ticks=988/121, in_queue=1109, util=97.80% 00:18:59.351 nvme0n2: ios=1045/1134, merge=0/0, ticks=518/261, in_queue=779, util=85.95% 00:18:59.351 nvme0n3: ios=26/512, merge=0/0, ticks=625/162, in_queue=787, util=88.58% 00:18:59.351 nvme0n4: ios=894/1024, merge=0/0, ticks=778/284, in_queue=1062, util=98.93% 00:18:59.351 03:06:54 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:59.351 [global] 00:18:59.351 thread=1 00:18:59.351 invalidate=1 00:18:59.351 rw=randwrite 00:18:59.351 time_based=1 00:18:59.351 runtime=1 00:18:59.351 ioengine=libaio 00:18:59.351 direct=1 00:18:59.351 bs=4096 00:18:59.351 iodepth=1 00:18:59.351 norandommap=0 00:18:59.351 numjobs=1 00:18:59.351 00:18:59.351 verify_dump=1 00:18:59.351 verify_backlog=512 00:18:59.351 verify_state_save=0 00:18:59.351 do_verify=1 00:18:59.351 verify=crc32c-intel 00:18:59.351 [job0] 00:18:59.351 filename=/dev/nvme0n1 00:18:59.351 [job1] 00:18:59.351 filename=/dev/nvme0n2 00:18:59.351 [job2] 00:18:59.351 filename=/dev/nvme0n3 00:18:59.351 [job3] 00:18:59.351 filename=/dev/nvme0n4 00:18:59.351 Could not set queue depth (nvme0n1) 00:18:59.351 Could not set queue depth (nvme0n2) 00:18:59.351 Could not set queue depth (nvme0n3) 00:18:59.351 Could not set queue depth (nvme0n4) 00:18:59.611 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:59.611 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:59.611 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:59.611 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:59.611 fio-3.35 00:18:59.611 Starting 4 threads 00:19:00.987 00:19:00.987 job0: (groupid=0, jobs=1): err= 0: pid=2018084: Sun Jul 14 03:06:55 2024 00:19:00.987 read: IOPS=1505, BW=6022KiB/s (6167kB/s)(6028KiB/1001msec) 00:19:00.987 slat (nsec): min=5447, max=63431, avg=10065.86, stdev=5710.70 00:19:00.987 clat (usec): min=304, max=42105, avg=386.16, stdev=1077.44 00:19:00.987 lat (usec): min=310, max=42119, avg=396.23, stdev=1077.61 00:19:00.987 clat percentiles (usec): 00:19:00.987 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 330], 00:19:00.987 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 355], 00:19:00.987 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 404], 95.00th=[ 429], 00:19:00.987 | 99.00th=[ 545], 99.50th=[ 750], 99.90th=[ 1500], 99.95th=[42206], 00:19:00.987 | 99.99th=[42206] 00:19:00.987 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:19:00.987 slat (nsec): min=6854, max=57269, avg=13383.38, stdev=8495.50 00:19:00.987 clat (usec): min=192, max=1159, avg=242.34, stdev=50.71 00:19:00.987 lat (usec): min=199, max=1201, avg=255.72, stdev=54.83 00:19:00.987 clat percentiles (usec): 00:19:00.987 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:19:00.987 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 235], 00:19:00.987 | 70.00th=[ 253], 80.00th=[ 269], 90.00th=[ 289], 95.00th=[ 314], 00:19:00.987 | 99.00th=[ 424], 99.50th=[ 461], 99.90th=[ 865], 99.95th=[ 1156], 00:19:00.987 | 99.99th=[ 1156] 00:19:00.987 bw ( KiB/s): min= 7480, max= 7480, per=44.21%, avg=7480.00, stdev= 0.00, samples=1 00:19:00.987 iops : min= 1870, max= 1870, avg=1870.00, stdev= 0.00, samples=1 00:19:00.987 lat (usec) : 250=34.31%, 500=64.77%, 750=0.59%, 1000=0.13% 00:19:00.987 lat (msec) : 2=0.16%, 50=0.03% 00:19:00.987 cpu : 
usr=2.60%, sys=5.00%, ctx=3045, majf=0, minf=1 00:19:00.987 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:00.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.987 issued rwts: total=1507,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.987 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:00.987 job1: (groupid=0, jobs=1): err= 0: pid=2018085: Sun Jul 14 03:06:55 2024 00:19:00.987 read: IOPS=699, BW=2797KiB/s (2864kB/s)(2800KiB/1001msec) 00:19:00.987 slat (nsec): min=5811, max=33933, avg=9571.56, stdev=4845.77 00:19:00.987 clat (usec): min=316, max=41126, avg=979.77, stdev=4814.78 00:19:00.987 lat (usec): min=322, max=41142, avg=989.34, stdev=4815.46 00:19:00.987 clat percentiles (usec): 00:19:00.987 | 1.00th=[ 322], 5.00th=[ 330], 10.00th=[ 334], 20.00th=[ 343], 00:19:00.987 | 30.00th=[ 355], 40.00th=[ 400], 50.00th=[ 408], 60.00th=[ 416], 00:19:00.987 | 70.00th=[ 424], 80.00th=[ 441], 90.00th=[ 461], 95.00th=[ 486], 00:19:00.987 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:00.987 | 99.99th=[41157] 00:19:00.987 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:00.987 slat (nsec): min=7258, max=70846, avg=18093.49, stdev=12690.51 00:19:00.987 clat (usec): min=194, max=517, avg=275.21, stdev=68.27 00:19:00.987 lat (usec): min=202, max=556, avg=293.31, stdev=77.71 00:19:00.987 clat percentiles (usec): 00:19:00.987 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 217], 00:19:00.987 | 30.00th=[ 223], 40.00th=[ 231], 50.00th=[ 243], 60.00th=[ 273], 00:19:00.987 | 70.00th=[ 306], 80.00th=[ 343], 90.00th=[ 388], 95.00th=[ 404], 00:19:00.987 | 99.00th=[ 457], 99.50th=[ 469], 99.90th=[ 506], 99.95th=[ 519], 00:19:00.987 | 99.99th=[ 519] 00:19:00.987 bw ( KiB/s): min= 4096, max= 4096, per=24.21%, avg=4096.00, stdev= 0.00, samples=1 
00:19:00.987 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:00.987 lat (usec) : 250=30.97%, 500=67.17%, 750=1.10%, 1000=0.06% 00:19:00.987 lat (msec) : 2=0.12%, 50=0.58% 00:19:00.987 cpu : usr=2.20%, sys=3.00%, ctx=1725, majf=0, minf=1 00:19:00.987 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:00.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.987 issued rwts: total=700,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.987 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:00.987 job2: (groupid=0, jobs=1): err= 0: pid=2018086: Sun Jul 14 03:06:55 2024 00:19:00.987 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:00.987 slat (nsec): min=5952, max=35804, avg=9834.00, stdev=5169.59 00:19:00.987 clat (usec): min=307, max=41428, avg=588.59, stdev=3109.45 00:19:00.987 lat (usec): min=314, max=41435, avg=598.43, stdev=3109.58 00:19:00.987 clat percentiles (usec): 00:19:00.987 | 1.00th=[ 314], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 326], 00:19:00.987 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 347], 00:19:00.987 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 375], 95.00th=[ 400], 00:19:00.987 | 99.00th=[ 857], 99.50th=[40633], 99.90th=[41157], 99.95th=[41681], 00:19:00.987 | 99.99th=[41681] 00:19:00.987 write: IOPS=1278, BW=5115KiB/s (5238kB/s)(5120KiB/1001msec); 0 zone resets 00:19:00.987 slat (nsec): min=7367, max=76745, avg=17994.96, stdev=13191.59 00:19:00.987 clat (usec): min=199, max=546, avg=277.44, stdev=78.95 00:19:00.987 lat (usec): min=207, max=577, avg=295.44, stdev=88.96 00:19:00.987 clat percentiles (usec): 00:19:00.987 | 1.00th=[ 204], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 217], 00:19:00.987 | 30.00th=[ 223], 40.00th=[ 229], 50.00th=[ 237], 60.00th=[ 251], 00:19:00.987 | 70.00th=[ 302], 80.00th=[ 363], 90.00th=[ 404], 95.00th=[ 437], 
00:19:00.987 | 99.00th=[ 494], 99.50th=[ 510], 99.90th=[ 537], 99.95th=[ 545], 00:19:00.987 | 99.99th=[ 545] 00:19:00.987 bw ( KiB/s): min= 4096, max= 4096, per=24.21%, avg=4096.00, stdev= 0.00, samples=1 00:19:00.987 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:00.987 lat (usec) : 250=32.90%, 500=65.93%, 750=0.61%, 1000=0.22% 00:19:00.987 lat (msec) : 2=0.09%, 50=0.26% 00:19:00.987 cpu : usr=2.20%, sys=4.60%, ctx=2305, majf=0, minf=1 00:19:00.987 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:00.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.987 issued rwts: total=1024,1280,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.987 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:00.987 job3: (groupid=0, jobs=1): err= 0: pid=2018087: Sun Jul 14 03:06:55 2024 00:19:00.987 read: IOPS=22, BW=89.4KiB/s (91.6kB/s)(92.0KiB/1029msec) 00:19:00.987 slat (nsec): min=13189, max=36139, avg=17033.57, stdev=6022.84 00:19:00.987 clat (usec): min=500, max=41984, avg=37605.48, stdev=11710.81 00:19:00.987 lat (usec): min=515, max=41998, avg=37622.52, stdev=11710.69 00:19:00.987 clat percentiles (usec): 00:19:00.987 | 1.00th=[ 502], 5.00th=[ 510], 10.00th=[40633], 20.00th=[41157], 00:19:00.987 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:00.987 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:19:00.987 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:00.987 | 99.99th=[42206] 00:19:00.987 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:19:00.987 slat (nsec): min=8348, max=75720, avg=23653.69, stdev=10562.16 00:19:00.987 clat (usec): min=222, max=675, avg=289.39, stdev=44.72 00:19:00.987 lat (usec): min=231, max=685, avg=313.05, stdev=46.99 00:19:00.987 clat percentiles (usec): 00:19:00.987 | 
1.00th=[ 227], 5.00th=[ 233], 10.00th=[ 239], 20.00th=[ 251], 00:19:00.987 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 293], 00:19:00.987 | 70.00th=[ 306], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 363], 00:19:00.987 | 99.00th=[ 412], 99.50th=[ 420], 99.90th=[ 676], 99.95th=[ 676], 00:19:00.987 | 99.99th=[ 676] 00:19:00.987 bw ( KiB/s): min= 4096, max= 4096, per=24.21%, avg=4096.00, stdev= 0.00, samples=1 00:19:00.987 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:00.987 lat (usec) : 250=18.32%, 500=77.20%, 750=0.56% 00:19:00.987 lat (msec) : 50=3.93% 00:19:00.987 cpu : usr=0.97%, sys=0.78%, ctx=536, majf=0, minf=2 00:19:00.987 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:00.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.987 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.987 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:00.987 00:19:00.987 Run status group 0 (all jobs): 00:19:00.987 READ: bw=12.4MiB/s (13.0MB/s), 89.4KiB/s-6022KiB/s (91.6kB/s-6167kB/s), io=12.7MiB (13.3MB), run=1001-1029msec 00:19:00.987 WRITE: bw=16.5MiB/s (17.3MB/s), 1990KiB/s-6138KiB/s (2038kB/s-6285kB/s), io=17.0MiB (17.8MB), run=1001-1029msec 00:19:00.987 00:19:00.987 Disk stats (read/write): 00:19:00.987 nvme0n1: ios=1105/1536, merge=0/0, ticks=1377/354, in_queue=1731, util=94.09% 00:19:00.988 nvme0n2: ios=558/743, merge=0/0, ticks=1013/202, in_queue=1215, util=97.87% 00:19:00.988 nvme0n3: ios=844/1024, merge=0/0, ticks=972/264, in_queue=1236, util=98.12% 00:19:00.988 nvme0n4: ios=44/512, merge=0/0, ticks=1608/146, in_queue=1754, util=97.79% 00:19:00.988 03:06:55 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:00.988 [global] 00:19:00.988 thread=1 00:19:00.988 
invalidate=1 00:19:00.988 rw=write 00:19:00.988 time_based=1 00:19:00.988 runtime=1 00:19:00.988 ioengine=libaio 00:19:00.988 direct=1 00:19:00.988 bs=4096 00:19:00.988 iodepth=128 00:19:00.988 norandommap=0 00:19:00.988 numjobs=1 00:19:00.988 00:19:00.988 verify_dump=1 00:19:00.988 verify_backlog=512 00:19:00.988 verify_state_save=0 00:19:00.988 do_verify=1 00:19:00.988 verify=crc32c-intel 00:19:00.988 [job0] 00:19:00.988 filename=/dev/nvme0n1 00:19:00.988 [job1] 00:19:00.988 filename=/dev/nvme0n2 00:19:00.988 [job2] 00:19:00.988 filename=/dev/nvme0n3 00:19:00.988 [job3] 00:19:00.988 filename=/dev/nvme0n4 00:19:00.988 Could not set queue depth (nvme0n1) 00:19:00.988 Could not set queue depth (nvme0n2) 00:19:00.988 Could not set queue depth (nvme0n3) 00:19:00.988 Could not set queue depth (nvme0n4) 00:19:00.988 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:00.988 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:00.988 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:00.988 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:00.988 fio-3.35 00:19:00.988 Starting 4 threads 00:19:02.364 00:19:02.364 job0: (groupid=0, jobs=1): err= 0: pid=2018315: Sun Jul 14 03:06:57 2024 00:19:02.364 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:19:02.364 slat (usec): min=2, max=15650, avg=130.76, stdev=929.32 00:19:02.364 clat (usec): min=4540, max=60143, avg=17315.84, stdev=10251.74 00:19:02.364 lat (usec): min=4548, max=60157, avg=17446.59, stdev=10338.98 00:19:02.364 clat percentiles (usec): 00:19:02.364 | 1.00th=[ 6783], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[ 9896], 00:19:02.364 | 30.00th=[10552], 40.00th=[11731], 50.00th=[13173], 60.00th=[16319], 00:19:02.364 | 70.00th=[19006], 80.00th=[21627], 90.00th=[32900], 
95.00th=[43254], 00:19:02.364 | 99.00th=[52691], 99.50th=[52691], 99.90th=[54264], 99.95th=[59507], 00:19:02.364 | 99.99th=[60031] 00:19:02.364 write: IOPS=3811, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1005msec); 0 zone resets 00:19:02.364 slat (usec): min=3, max=17070, avg=102.98, stdev=661.24 00:19:02.364 clat (usec): min=1016, max=108533, avg=16965.94, stdev=15712.57 00:19:02.364 lat (usec): min=1435, max=108541, avg=17068.91, stdev=15767.99 00:19:02.364 clat percentiles (msec): 00:19:02.364 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:19:02.364 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 13], 00:19:02.364 | 70.00th=[ 15], 80.00th=[ 21], 90.00th=[ 35], 95.00th=[ 45], 00:19:02.364 | 99.00th=[ 102], 99.50th=[ 105], 99.90th=[ 109], 99.95th=[ 109], 00:19:02.364 | 99.99th=[ 109] 00:19:02.364 bw ( KiB/s): min= 9784, max=19848, per=23.64%, avg=14816.00, stdev=7116.32, samples=2 00:19:02.364 iops : min= 2446, max= 4962, avg=3704.00, stdev=1779.08, samples=2 00:19:02.364 lat (msec) : 2=0.13%, 4=0.04%, 10=28.75%, 20=48.91%, 50=19.34% 00:19:02.364 lat (msec) : 100=2.27%, 250=0.55% 00:19:02.364 cpu : usr=2.89%, sys=4.58%, ctx=368, majf=0, minf=11 00:19:02.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:02.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:02.364 issued rwts: total=3584,3831,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:02.364 job1: (groupid=0, jobs=1): err= 0: pid=2018316: Sun Jul 14 03:06:57 2024 00:19:02.364 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:19:02.364 slat (usec): min=3, max=43749, avg=92.70, stdev=791.96 00:19:02.364 clat (usec): min=4597, max=58133, avg=10819.53, stdev=5370.23 00:19:02.364 lat (usec): min=4625, max=58144, avg=10912.23, stdev=5420.68 00:19:02.364 clat percentiles (usec): 
00:19:02.364 | 1.00th=[ 5735], 5.00th=[ 6390], 10.00th=[ 7046], 20.00th=[ 8160], 00:19:02.364 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10290], 00:19:02.364 | 70.00th=[11338], 80.00th=[12518], 90.00th=[14877], 95.00th=[17171], 00:19:02.364 | 99.00th=[26870], 99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:19:02.364 | 99.99th=[57934] 00:19:02.364 write: IOPS=5283, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1004msec); 0 zone resets 00:19:02.364 slat (usec): min=4, max=18570, avg=90.23, stdev=554.39 00:19:02.364 clat (usec): min=2563, max=59453, avg=13440.57, stdev=9771.30 00:19:02.364 lat (usec): min=2929, max=59463, avg=13530.80, stdev=9808.60 00:19:02.364 clat percentiles (usec): 00:19:02.364 | 1.00th=[ 3556], 5.00th=[ 4686], 10.00th=[ 5604], 20.00th=[ 7177], 00:19:02.364 | 30.00th=[ 8848], 40.00th=[ 9896], 50.00th=[10683], 60.00th=[11600], 00:19:02.364 | 70.00th=[14484], 80.00th=[17171], 90.00th=[21627], 95.00th=[29230], 00:19:02.364 | 99.00th=[58459], 99.50th=[58983], 99.90th=[59507], 99.95th=[59507], 00:19:02.364 | 99.99th=[59507] 00:19:02.364 bw ( KiB/s): min=20288, max=21136, per=33.05%, avg=20712.00, stdev=599.63, samples=2 00:19:02.364 iops : min= 5072, max= 5284, avg=5178.00, stdev=149.91, samples=2 00:19:02.364 lat (msec) : 4=1.09%, 10=48.43%, 20=42.20%, 50=6.46%, 100=1.82% 00:19:02.364 cpu : usr=5.38%, sys=7.98%, ctx=548, majf=0, minf=9 00:19:02.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:02.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:02.364 issued rwts: total=5120,5305,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:02.364 job2: (groupid=0, jobs=1): err= 0: pid=2018317: Sun Jul 14 03:06:57 2024 00:19:02.364 read: IOPS=2215, BW=8861KiB/s (9074kB/s)(8932KiB/1008msec) 00:19:02.364 slat (usec): min=2, max=34111, 
avg=230.15, stdev=1865.15 00:19:02.364 clat (usec): min=1078, max=90404, avg=30626.54, stdev=16230.75 00:19:02.364 lat (usec): min=3107, max=90417, avg=30856.69, stdev=16363.08 00:19:02.364 clat percentiles (usec): 00:19:02.364 | 1.00th=[ 4293], 5.00th=[12256], 10.00th=[15008], 20.00th=[16188], 00:19:02.364 | 30.00th=[17695], 40.00th=[22676], 50.00th=[26870], 60.00th=[31065], 00:19:02.364 | 70.00th=[39060], 80.00th=[45351], 90.00th=[56361], 95.00th=[59507], 00:19:02.364 | 99.00th=[78119], 99.50th=[83362], 99.90th=[83362], 99.95th=[83362], 00:19:02.364 | 99.99th=[90702] 00:19:02.364 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:19:02.364 slat (usec): min=3, max=27818, avg=178.52, stdev=1241.87 00:19:02.364 clat (usec): min=992, max=107063, avg=23253.28, stdev=17496.15 00:19:02.364 lat (usec): min=1038, max=107071, avg=23431.79, stdev=17578.62 00:19:02.364 clat percentiles (msec): 00:19:02.364 | 1.00th=[ 8], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14], 00:19:02.364 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 19], 60.00th=[ 20], 00:19:02.364 | 70.00th=[ 23], 80.00th=[ 26], 90.00th=[ 43], 95.00th=[ 55], 00:19:02.364 | 99.00th=[ 105], 99.50th=[ 108], 99.90th=[ 108], 99.95th=[ 108], 00:19:02.364 | 99.99th=[ 108] 00:19:02.364 bw ( KiB/s): min= 8192, max=12288, per=16.34%, avg=10240.00, stdev=2896.31, samples=2 00:19:02.364 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:19:02.364 lat (usec) : 1000=0.02% 00:19:02.364 lat (msec) : 2=0.02%, 4=0.33%, 10=3.36%, 20=45.94%, 50=40.85% 00:19:02.364 lat (msec) : 100=8.85%, 250=0.63% 00:19:02.364 cpu : usr=1.39%, sys=2.88%, ctx=228, majf=0, minf=13 00:19:02.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:02.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:02.364 issued rwts: total=2233,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:19:02.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:02.364 job3: (groupid=0, jobs=1): err= 0: pid=2018318: Sun Jul 14 03:06:57 2024 00:19:02.364 read: IOPS=3602, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1008msec) 00:19:02.364 slat (usec): min=3, max=14716, avg=124.87, stdev=909.46 00:19:02.364 clat (usec): min=2042, max=46159, avg=16467.13, stdev=6395.67 00:19:02.364 lat (usec): min=6226, max=46168, avg=16592.00, stdev=6439.27 00:19:02.364 clat percentiles (usec): 00:19:02.364 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[11076], 00:19:02.364 | 30.00th=[12256], 40.00th=[13435], 50.00th=[14877], 60.00th=[16057], 00:19:02.364 | 70.00th=[18482], 80.00th=[21103], 90.00th=[26346], 95.00th=[29754], 00:19:02.364 | 99.00th=[33817], 99.50th=[36963], 99.90th=[46400], 99.95th=[46400], 00:19:02.364 | 99.99th=[46400] 00:19:02.364 write: IOPS=4063, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1008msec); 0 zone resets 00:19:02.364 slat (usec): min=4, max=21253, avg=126.43, stdev=826.26 00:19:02.364 clat (usec): min=1621, max=59787, avg=16607.15, stdev=10191.79 00:19:02.364 lat (usec): min=1633, max=59797, avg=16733.58, stdev=10247.31 00:19:02.364 clat percentiles (usec): 00:19:02.364 | 1.00th=[ 4359], 5.00th=[ 7308], 10.00th=[ 7963], 20.00th=[ 9110], 00:19:02.364 | 30.00th=[10945], 40.00th=[12256], 50.00th=[13435], 60.00th=[14746], 00:19:02.364 | 70.00th=[17433], 80.00th=[22152], 90.00th=[30016], 95.00th=[42206], 00:19:02.364 | 99.00th=[54264], 99.50th=[55313], 99.90th=[55837], 99.95th=[56886], 00:19:02.364 | 99.99th=[60031] 00:19:02.364 bw ( KiB/s): min=12008, max=20112, per=25.63%, avg=16060.00, stdev=5730.39, samples=2 00:19:02.364 iops : min= 3002, max= 5028, avg=4015.00, stdev=1432.60, samples=2 00:19:02.364 lat (msec) : 2=0.03%, 4=0.34%, 10=17.01%, 20=60.22%, 50=21.47% 00:19:02.364 lat (msec) : 100=0.94% 00:19:02.364 cpu : usr=4.67%, sys=6.16%, ctx=317, majf=0, minf=17 00:19:02.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 
00:19:02.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:02.364 issued rwts: total=3631,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.364 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:02.364 00:19:02.364 Run status group 0 (all jobs): 00:19:02.364 READ: bw=56.5MiB/s (59.2MB/s), 8861KiB/s-19.9MiB/s (9074kB/s-20.9MB/s), io=56.9MiB (59.7MB), run=1004-1008msec 00:19:02.364 WRITE: bw=61.2MiB/s (64.2MB/s), 9.92MiB/s-20.6MiB/s (10.4MB/s-21.6MB/s), io=61.7MiB (64.7MB), run=1004-1008msec 00:19:02.364 00:19:02.364 Disk stats (read/write): 00:19:02.364 nvme0n1: ios=3101/3153, merge=0/0, ticks=33560/28257, in_queue=61817, util=97.19% 00:19:02.364 nvme0n2: ios=4137/4528, merge=0/0, ticks=42031/46764, in_queue=88795, util=99.19% 00:19:02.365 nvme0n3: ios=1975/2048, merge=0/0, ticks=33299/33442, in_queue=66741, util=88.66% 00:19:02.365 nvme0n4: ios=3109/3584, merge=0/0, ticks=48034/55108, in_queue=103142, util=89.61% 00:19:02.365 03:06:57 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:02.365 [global] 00:19:02.365 thread=1 00:19:02.365 invalidate=1 00:19:02.365 rw=randwrite 00:19:02.365 time_based=1 00:19:02.365 runtime=1 00:19:02.365 ioengine=libaio 00:19:02.365 direct=1 00:19:02.365 bs=4096 00:19:02.365 iodepth=128 00:19:02.365 norandommap=0 00:19:02.365 numjobs=1 00:19:02.365 00:19:02.365 verify_dump=1 00:19:02.365 verify_backlog=512 00:19:02.365 verify_state_save=0 00:19:02.365 do_verify=1 00:19:02.365 verify=crc32c-intel 00:19:02.365 [job0] 00:19:02.365 filename=/dev/nvme0n1 00:19:02.365 [job1] 00:19:02.365 filename=/dev/nvme0n2 00:19:02.365 [job2] 00:19:02.365 filename=/dev/nvme0n3 00:19:02.365 [job3] 00:19:02.365 filename=/dev/nvme0n4 00:19:02.365 Could not set queue depth (nvme0n1) 00:19:02.365 Could not set queue depth (nvme0n2) 
00:19:02.365 Could not set queue depth (nvme0n3) 00:19:02.365 Could not set queue depth (nvme0n4) 00:19:02.623 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:02.623 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:02.623 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:02.623 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:02.623 fio-3.35 00:19:02.623 Starting 4 threads 00:19:04.001 00:19:04.001 job0: (groupid=0, jobs=1): err= 0: pid=2018556: Sun Jul 14 03:06:58 2024 00:19:04.001 read: IOPS=4906, BW=19.2MiB/s (20.1MB/s)(19.3MiB/1005msec) 00:19:04.001 slat (usec): min=2, max=10152, avg=95.83, stdev=524.14 00:19:04.001 clat (usec): min=607, max=36356, avg=12049.80, stdev=2872.76 00:19:04.001 lat (usec): min=6675, max=36362, avg=12145.63, stdev=2896.60 00:19:04.001 clat percentiles (usec): 00:19:04.001 | 1.00th=[ 8094], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10421], 00:19:04.001 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11469], 60.00th=[11731], 00:19:04.001 | 70.00th=[12387], 80.00th=[12911], 90.00th=[14484], 95.00th=[17695], 00:19:04.001 | 99.00th=[22152], 99.50th=[28967], 99.90th=[36439], 99.95th=[36439], 00:19:04.001 | 99.99th=[36439] 00:19:04.001 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:19:04.001 slat (usec): min=3, max=7011, avg=94.93, stdev=481.86 00:19:04.001 clat (usec): min=5175, max=45143, avg=13263.98, stdev=5296.16 00:19:04.001 lat (usec): min=5189, max=45151, avg=13358.91, stdev=5324.64 00:19:04.001 clat percentiles (usec): 00:19:04.001 | 1.00th=[ 7701], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10421], 00:19:04.001 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12387], 60.00th=[13042], 00:19:04.001 | 70.00th=[13566], 80.00th=[14353], 
90.00th=[15795], 95.00th=[17957], 00:19:04.001 | 99.00th=[41157], 99.50th=[41681], 99.90th=[45351], 99.95th=[45351], 00:19:04.001 | 99.99th=[45351] 00:19:04.001 bw ( KiB/s): min=19272, max=21688, per=25.61%, avg=20480.00, stdev=1708.37, samples=2 00:19:04.001 iops : min= 4818, max= 5422, avg=5120.00, stdev=427.09, samples=2 00:19:04.001 lat (usec) : 750=0.01% 00:19:04.001 lat (msec) : 10=13.56%, 20=82.94%, 50=3.49% 00:19:04.001 cpu : usr=6.27%, sys=8.07%, ctx=520, majf=0, minf=1 00:19:04.001 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:04.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:04.001 issued rwts: total=4931,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.001 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:04.001 job1: (groupid=0, jobs=1): err= 0: pid=2018557: Sun Jul 14 03:06:58 2024 00:19:04.001 read: IOPS=5233, BW=20.4MiB/s (21.4MB/s)(20.5MiB/1002msec) 00:19:04.001 slat (usec): min=3, max=3847, avg=89.38, stdev=443.90 00:19:04.001 clat (usec): min=711, max=16140, avg=11668.38, stdev=1347.47 00:19:04.001 lat (usec): min=3533, max=16165, avg=11757.76, stdev=1365.61 00:19:04.001 clat percentiles (usec): 00:19:04.001 | 1.00th=[ 7242], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[10945], 00:19:04.001 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11731], 60.00th=[11994], 00:19:04.001 | 70.00th=[12125], 80.00th=[12518], 90.00th=[13173], 95.00th=[13435], 00:19:04.001 | 99.00th=[14484], 99.50th=[15008], 99.90th=[15795], 99.95th=[16057], 00:19:04.001 | 99.99th=[16188] 00:19:04.001 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:19:04.001 slat (usec): min=3, max=3858, avg=86.20, stdev=432.97 00:19:04.001 clat (usec): min=7302, max=15629, avg=11614.82, stdev=1107.49 00:19:04.001 lat (usec): min=7311, max=16197, avg=11701.02, stdev=1095.98 00:19:04.001 clat percentiles 
(usec): 00:19:04.001 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[10159], 20.00th=[10814], 00:19:04.001 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[11994], 00:19:04.001 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12780], 95.00th=[13042], 00:19:04.001 | 99.00th=[14222], 99.50th=[14484], 99.90th=[15401], 99.95th=[15401], 00:19:04.001 | 99.99th=[15664] 00:19:04.001 bw ( KiB/s): min=22096, max=22928, per=28.15%, avg=22512.00, stdev=588.31, samples=2 00:19:04.001 iops : min= 5524, max= 5732, avg=5628.00, stdev=147.08, samples=2 00:19:04.001 lat (usec) : 750=0.01% 00:19:04.001 lat (msec) : 4=0.20%, 10=7.83%, 20=91.95% 00:19:04.001 cpu : usr=5.49%, sys=9.69%, ctx=540, majf=0, minf=1 00:19:04.001 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:19:04.001 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.001 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:04.001 issued rwts: total=5244,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.001 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:04.001 job2: (groupid=0, jobs=1): err= 0: pid=2018560: Sun Jul 14 03:06:58 2024 00:19:04.001 read: IOPS=4122, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1006msec) 00:19:04.001 slat (usec): min=2, max=10624, avg=113.51, stdev=606.88 00:19:04.001 clat (usec): min=4896, max=25164, avg=14564.70, stdev=2756.49 00:19:04.001 lat (usec): min=5523, max=25180, avg=14678.21, stdev=2770.22 00:19:04.001 clat percentiles (usec): 00:19:04.001 | 1.00th=[ 7898], 5.00th=[10421], 10.00th=[11600], 20.00th=[12649], 00:19:04.001 | 30.00th=[13173], 40.00th=[13829], 50.00th=[14222], 60.00th=[15008], 00:19:04.001 | 70.00th=[15401], 80.00th=[16450], 90.00th=[18220], 95.00th=[19530], 00:19:04.001 | 99.00th=[23462], 99.50th=[24249], 99.90th=[25035], 99.95th=[25035], 00:19:04.001 | 99.99th=[25035] 00:19:04.001 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:19:04.001 slat (usec): 
min=3, max=8384, avg=106.72, stdev=609.46 00:19:04.001 clat (usec): min=7167, max=27758, avg=14499.39, stdev=2730.81 00:19:04.001 lat (usec): min=7504, max=27769, avg=14606.10, stdev=2739.62 00:19:04.001 clat percentiles (usec): 00:19:04.001 | 1.00th=[ 8291], 5.00th=[10814], 10.00th=[12125], 20.00th=[12911], 00:19:04.001 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14222], 60.00th=[14353], 00:19:04.001 | 70.00th=[14746], 80.00th=[15401], 90.00th=[17957], 95.00th=[19792], 00:19:04.001 | 99.00th=[24773], 99.50th=[26346], 99.90th=[27657], 99.95th=[27657], 00:19:04.001 | 99.99th=[27657] 00:19:04.002 bw ( KiB/s): min=16696, max=19560, per=22.67%, avg=18128.00, stdev=2025.15, samples=2 00:19:04.002 iops : min= 4174, max= 4890, avg=4532.00, stdev=506.29, samples=2 00:19:04.002 lat (msec) : 10=3.47%, 20=91.99%, 50=4.53% 00:19:04.002 cpu : usr=4.78%, sys=7.26%, ctx=469, majf=0, minf=1 00:19:04.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:04.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:04.002 issued rwts: total=4147,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.002 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:04.002 job3: (groupid=0, jobs=1): err= 0: pid=2018561: Sun Jul 14 03:06:58 2024 00:19:04.002 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:19:04.002 slat (usec): min=2, max=11336, avg=108.19, stdev=665.49 00:19:04.002 clat (usec): min=7528, max=30939, avg=13914.70, stdev=3098.25 00:19:04.002 lat (usec): min=7541, max=30954, avg=14022.89, stdev=3137.63 00:19:04.002 clat percentiles (usec): 00:19:04.002 | 1.00th=[ 9372], 5.00th=[10159], 10.00th=[11076], 20.00th=[12256], 00:19:04.002 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304], 00:19:04.002 | 70.00th=[13960], 80.00th=[15270], 90.00th=[18220], 95.00th=[20055], 00:19:04.002 | 99.00th=[25560], 
99.50th=[29492], 99.90th=[29492], 99.95th=[29492], 00:19:04.002 | 99.99th=[31065] 00:19:04.002 write: IOPS=4727, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1005msec); 0 zone resets 00:19:04.002 slat (usec): min=3, max=10823, avg=95.49, stdev=526.28 00:19:04.002 clat (usec): min=1318, max=24016, avg=13229.72, stdev=3093.71 00:19:04.002 lat (usec): min=2521, max=27283, avg=13325.21, stdev=3109.77 00:19:04.002 clat percentiles (usec): 00:19:04.002 | 1.00th=[ 4146], 5.00th=[ 7570], 10.00th=[10028], 20.00th=[11863], 00:19:04.002 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304], 00:19:04.002 | 70.00th=[13566], 80.00th=[15139], 90.00th=[17171], 95.00th=[19268], 00:19:04.002 | 99.00th=[20841], 99.50th=[21103], 99.90th=[21627], 99.95th=[23200], 00:19:04.002 | 99.99th=[23987] 00:19:04.002 bw ( KiB/s): min=16592, max=20400, per=23.13%, avg=18496.00, stdev=2692.66, samples=2 00:19:04.002 iops : min= 4148, max= 5100, avg=4624.00, stdev=673.17, samples=2 00:19:04.002 lat (msec) : 2=0.01%, 4=0.33%, 10=6.60%, 20=88.77%, 50=4.28% 00:19:04.002 cpu : usr=5.58%, sys=7.47%, ctx=499, majf=0, minf=1 00:19:04.002 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:04.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:04.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:04.002 issued rwts: total=4608,4751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:04.002 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:04.002 00:19:04.002 Run status group 0 (all jobs): 00:19:04.002 READ: bw=73.5MiB/s (77.1MB/s), 16.1MiB/s-20.4MiB/s (16.9MB/s-21.4MB/s), io=73.9MiB (77.5MB), run=1002-1006msec 00:19:04.002 WRITE: bw=78.1MiB/s (81.9MB/s), 17.9MiB/s-22.0MiB/s (18.8MB/s-23.0MB/s), io=78.6MiB (82.4MB), run=1002-1006msec 00:19:04.002 00:19:04.002 Disk stats (read/write): 00:19:04.002 nvme0n1: ios=4466/4608, merge=0/0, ticks=25289/25949, in_queue=51238, util=86.97% 00:19:04.002 nvme0n2: ios=4562/4608, 
merge=0/0, ticks=17951/16324, in_queue=34275, util=98.27% 00:19:04.002 nvme0n3: ios=3584/3668, merge=0/0, ticks=26687/24196, in_queue=50883, util=88.82% 00:19:04.002 nvme0n4: ios=3888/4096, merge=0/0, ticks=28267/28379, in_queue=56646, util=97.89% 00:19:04.002 03:06:58 -- target/fio.sh@55 -- # sync 00:19:04.002 03:06:58 -- target/fio.sh@59 -- # fio_pid=2018700 00:19:04.002 03:06:58 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:04.002 03:06:58 -- target/fio.sh@61 -- # sleep 3 00:19:04.002 [global] 00:19:04.002 thread=1 00:19:04.002 invalidate=1 00:19:04.002 rw=read 00:19:04.002 time_based=1 00:19:04.002 runtime=10 00:19:04.002 ioengine=libaio 00:19:04.002 direct=1 00:19:04.002 bs=4096 00:19:04.002 iodepth=1 00:19:04.002 norandommap=1 00:19:04.002 numjobs=1 00:19:04.002 00:19:04.002 [job0] 00:19:04.002 filename=/dev/nvme0n1 00:19:04.002 [job1] 00:19:04.002 filename=/dev/nvme0n2 00:19:04.002 [job2] 00:19:04.002 filename=/dev/nvme0n3 00:19:04.002 [job3] 00:19:04.002 filename=/dev/nvme0n4 00:19:04.002 Could not set queue depth (nvme0n1) 00:19:04.002 Could not set queue depth (nvme0n2) 00:19:04.002 Could not set queue depth (nvme0n3) 00:19:04.002 Could not set queue depth (nvme0n4) 00:19:04.002 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.002 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.002 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.002 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:04.002 fio-3.35 00:19:04.002 Starting 4 threads 00:19:07.290 03:07:01 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:07.290 03:07:02 -- target/fio.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:07.290 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=1146880, buflen=4096 00:19:07.290 fio: pid=2018915, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:07.290 03:07:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:07.290 03:07:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:07.290 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=24428544, buflen=4096 00:19:07.290 fio: pid=2018913, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:07.548 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=29184000, buflen=4096 00:19:07.548 fio: pid=2018862, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:07.548 03:07:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:07.548 03:07:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:07.807 03:07:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:07.807 03:07:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:07.807 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=21319680, buflen=4096 00:19:07.807 fio: pid=2018877, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:07.807 00:19:07.807 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2018862: Sun Jul 14 03:07:02 2024 00:19:07.807 read: IOPS=2076, BW=8304KiB/s (8503kB/s)(27.8MiB/3432msec) 00:19:07.807 slat (usec): min=4, max=26739, avg=16.40, stdev=345.10 00:19:07.807 clat (usec): min=293, 
max=42571, avg=459.39, stdev=2077.15 00:19:07.807 lat (usec): min=299, max=42594, avg=475.79, stdev=2106.47 00:19:07.807 clat percentiles (usec): 00:19:07.807 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 322], 20.00th=[ 330], 00:19:07.807 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 355], 00:19:07.807 | 70.00th=[ 367], 80.00th=[ 375], 90.00th=[ 392], 95.00th=[ 420], 00:19:07.807 | 99.00th=[ 529], 99.50th=[ 570], 99.90th=[42206], 99.95th=[42206], 00:19:07.807 | 99.99th=[42730] 00:19:07.807 bw ( KiB/s): min= 104, max=11536, per=40.05%, avg=8073.33, stdev=4537.11, samples=6 00:19:07.807 iops : min= 26, max= 2884, avg=2018.33, stdev=1134.28, samples=6 00:19:07.807 lat (usec) : 500=98.44%, 750=1.23%, 1000=0.03% 00:19:07.807 lat (msec) : 2=0.01%, 4=0.01%, 50=0.25% 00:19:07.807 cpu : usr=1.31%, sys=3.56%, ctx=7129, majf=0, minf=1 00:19:07.807 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.808 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.808 issued rwts: total=7126,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.808 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.808 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2018877: Sun Jul 14 03:07:02 2024 00:19:07.808 read: IOPS=1412, BW=5648KiB/s (5784kB/s)(20.3MiB/3686msec) 00:19:07.808 slat (usec): min=4, max=13822, avg=18.46, stdev=191.58 00:19:07.808 clat (usec): min=292, max=42353, avg=681.26, stdev=3256.99 00:19:07.808 lat (usec): min=296, max=56009, avg=699.71, stdev=3296.95 00:19:07.808 clat percentiles (usec): 00:19:07.808 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 334], 20.00th=[ 367], 00:19:07.808 | 30.00th=[ 379], 40.00th=[ 388], 50.00th=[ 404], 60.00th=[ 429], 00:19:07.808 | 70.00th=[ 457], 80.00th=[ 486], 90.00th=[ 529], 95.00th=[ 553], 00:19:07.808 | 99.00th=[ 627], 
99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:07.808 | 99.99th=[42206] 00:19:07.808 bw ( KiB/s): min= 96, max= 9184, per=29.20%, avg=5885.29, stdev=3555.41, samples=7 00:19:07.808 iops : min= 24, max= 2296, avg=1471.29, stdev=888.89, samples=7 00:19:07.808 lat (usec) : 500=83.48%, 750=15.85%, 1000=0.02% 00:19:07.808 lat (msec) : 50=0.63% 00:19:07.808 cpu : usr=0.81%, sys=2.80%, ctx=5208, majf=0, minf=1 00:19:07.808 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.808 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.808 issued rwts: total=5206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.808 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.808 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2018913: Sun Jul 14 03:07:02 2024 00:19:07.808 read: IOPS=1876, BW=7504KiB/s (7684kB/s)(23.3MiB/3179msec) 00:19:07.808 slat (nsec): min=4744, max=71412, avg=20160.13, stdev=10674.30 00:19:07.808 clat (usec): min=299, max=42012, avg=504.25, stdev=2052.50 00:19:07.808 lat (usec): min=307, max=42022, avg=524.41, stdev=2052.33 00:19:07.808 clat percentiles (usec): 00:19:07.808 | 1.00th=[ 314], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 355], 00:19:07.808 | 30.00th=[ 371], 40.00th=[ 383], 50.00th=[ 392], 60.00th=[ 400], 00:19:07.808 | 70.00th=[ 416], 80.00th=[ 453], 90.00th=[ 486], 95.00th=[ 515], 00:19:07.808 | 99.00th=[ 570], 99.50th=[ 611], 99.90th=[41681], 99.95th=[42206], 00:19:07.808 | 99.99th=[42206] 00:19:07.808 bw ( KiB/s): min= 4016, max= 9720, per=37.21%, avg=7501.33, stdev=2450.51, samples=6 00:19:07.808 iops : min= 1004, max= 2430, avg=1875.33, stdev=612.63, samples=6 00:19:07.808 lat (usec) : 500=92.99%, 750=6.72% 00:19:07.808 lat (msec) : 2=0.02%, 50=0.25% 00:19:07.808 cpu : usr=1.48%, sys=4.44%, ctx=5966, majf=0, minf=1 00:19:07.808 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.808 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.808 issued rwts: total=5965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.808 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.808 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2018915: Sun Jul 14 03:07:02 2024 00:19:07.808 read: IOPS=95, BW=383KiB/s (392kB/s)(1120KiB/2928msec) 00:19:07.808 slat (nsec): min=5907, max=52147, avg=25691.15, stdev=9694.80 00:19:07.808 clat (usec): min=324, max=42107, avg=10389.84, stdev=17630.20 00:19:07.808 lat (usec): min=340, max=42120, avg=10415.56, stdev=17629.03 00:19:07.808 clat percentiles (usec): 00:19:07.808 | 1.00th=[ 330], 5.00th=[ 338], 10.00th=[ 355], 20.00th=[ 396], 00:19:07.808 | 30.00th=[ 404], 40.00th=[ 416], 50.00th=[ 437], 60.00th=[ 469], 00:19:07.808 | 70.00th=[ 502], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:19:07.808 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:07.808 | 99.99th=[42206] 00:19:07.808 bw ( KiB/s): min= 96, max= 1760, per=2.13%, avg=430.40, stdev=743.28, samples=5 00:19:07.808 iops : min= 24, max= 440, avg=107.60, stdev=185.82, samples=5 00:19:07.808 lat (usec) : 500=69.04%, 750=6.05%, 1000=0.36% 00:19:07.808 lat (msec) : 50=24.20% 00:19:07.808 cpu : usr=0.10%, sys=0.27%, ctx=281, majf=0, minf=1 00:19:07.808 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:07.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.808 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:07.808 issued rwts: total=281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:07.808 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:07.808 00:19:07.808 Run status group 0 (all 
jobs): 00:19:07.808 READ: bw=19.7MiB/s (20.6MB/s), 383KiB/s-8304KiB/s (392kB/s-8503kB/s), io=72.6MiB (76.1MB), run=2928-3686msec 00:19:07.808 00:19:07.808 Disk stats (read/write): 00:19:07.808 nvme0n1: ios=6956/0, merge=0/0, ticks=3451/0, in_queue=3451, util=98.11% 00:19:07.808 nvme0n2: ios=5203/0, merge=0/0, ticks=3396/0, in_queue=3396, util=96.09% 00:19:07.808 nvme0n3: ios=5873/0, merge=0/0, ticks=3050/0, in_queue=3050, util=99.53% 00:19:07.808 nvme0n4: ios=278/0, merge=0/0, ticks=2823/0, in_queue=2823, util=96.71% 00:19:08.067 03:07:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:08.067 03:07:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:08.325 03:07:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:08.325 03:07:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:08.583 03:07:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:08.583 03:07:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:08.841 03:07:03 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:08.841 03:07:03 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:09.099 03:07:04 -- target/fio.sh@69 -- # fio_status=0 00:19:09.099 03:07:04 -- target/fio.sh@70 -- # wait 2018700 00:19:09.099 03:07:04 -- target/fio.sh@70 -- # fio_status=4 00:19:09.099 03:07:04 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:09.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:09.099 03:07:04 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 
00:19:09.099 03:07:04 -- common/autotest_common.sh@1198 -- # local i=0 00:19:09.099 03:07:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:09.099 03:07:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:09.099 03:07:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:09.099 03:07:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:09.099 03:07:04 -- common/autotest_common.sh@1210 -- # return 0 00:19:09.099 03:07:04 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:09.099 03:07:04 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:09.099 nvmf hotplug test: fio failed as expected 00:19:09.099 03:07:04 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:09.356 03:07:04 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:09.357 03:07:04 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:09.357 03:07:04 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:09.357 03:07:04 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:09.357 03:07:04 -- target/fio.sh@91 -- # nvmftestfini 00:19:09.357 03:07:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:09.357 03:07:04 -- nvmf/common.sh@116 -- # sync 00:19:09.357 03:07:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:09.357 03:07:04 -- nvmf/common.sh@119 -- # set +e 00:19:09.357 03:07:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:09.357 03:07:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:09.357 rmmod nvme_tcp 00:19:09.357 rmmod nvme_fabrics 00:19:09.357 rmmod nvme_keyring 00:19:09.669 03:07:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:09.669 03:07:04 -- nvmf/common.sh@123 -- # set -e 00:19:09.669 03:07:04 -- nvmf/common.sh@124 -- # return 0 00:19:09.669 03:07:04 -- nvmf/common.sh@477 -- # '[' -n 2016724 ']' 00:19:09.669 03:07:04 -- 
nvmf/common.sh@478 -- # killprocess 2016724 00:19:09.669 03:07:04 -- common/autotest_common.sh@926 -- # '[' -z 2016724 ']' 00:19:09.669 03:07:04 -- common/autotest_common.sh@930 -- # kill -0 2016724 00:19:09.669 03:07:04 -- common/autotest_common.sh@931 -- # uname 00:19:09.669 03:07:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:09.669 03:07:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2016724 00:19:09.669 03:07:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:09.669 03:07:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:09.669 03:07:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2016724' 00:19:09.669 killing process with pid 2016724 00:19:09.669 03:07:04 -- common/autotest_common.sh@945 -- # kill 2016724 00:19:09.669 03:07:04 -- common/autotest_common.sh@950 -- # wait 2016724 00:19:09.934 03:07:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:09.934 03:07:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:09.934 03:07:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:09.934 03:07:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:09.934 03:07:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:09.934 03:07:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:09.934 03:07:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:09.934 03:07:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.839 03:07:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:11.839 00:19:11.839 real 0m23.650s 00:19:11.839 user 1m22.662s 00:19:11.839 sys 0m6.745s 00:19:11.839 03:07:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:11.839 03:07:06 -- common/autotest_common.sh@10 -- # set +x 00:19:11.839 ************************************ 00:19:11.839 END TEST nvmf_fio_target 00:19:11.839 ************************************ 00:19:11.839 03:07:06 -- 
nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:11.839 03:07:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:11.839 03:07:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:11.839 03:07:06 -- common/autotest_common.sh@10 -- # set +x 00:19:11.839 ************************************ 00:19:11.839 START TEST nvmf_bdevio 00:19:11.839 ************************************ 00:19:11.839 03:07:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:11.839 * Looking for test storage... 00:19:11.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:11.839 03:07:06 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:11.839 03:07:06 -- nvmf/common.sh@7 -- # uname -s 00:19:11.839 03:07:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:11.839 03:07:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:11.839 03:07:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:11.839 03:07:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:11.839 03:07:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:11.839 03:07:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:11.839 03:07:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:11.839 03:07:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:11.839 03:07:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:11.839 03:07:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:11.839 03:07:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:11.839 03:07:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:11.839 03:07:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:19:11.839 03:07:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:11.839 03:07:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:11.839 03:07:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:11.839 03:07:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:11.839 03:07:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:11.839 03:07:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:11.839 03:07:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.839 03:07:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.839 03:07:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.839 03:07:07 -- paths/export.sh@5 -- # export PATH 00:19:11.839 03:07:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:11.839 03:07:07 -- nvmf/common.sh@46 -- # : 0 00:19:11.839 03:07:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:11.839 03:07:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:11.839 03:07:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:11.839 03:07:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:11.839 03:07:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:11.839 03:07:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:11.839 03:07:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:11.839 03:07:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:11.839 03:07:07 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:11.839 03:07:07 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:11.839 03:07:07 -- target/bdevio.sh@14 -- # 
nvmftestinit 00:19:11.839 03:07:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:11.839 03:07:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:11.839 03:07:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:11.839 03:07:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:11.839 03:07:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:11.839 03:07:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:11.839 03:07:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:11.839 03:07:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.839 03:07:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:11.839 03:07:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:11.839 03:07:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:11.839 03:07:07 -- common/autotest_common.sh@10 -- # set +x 00:19:13.747 03:07:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:13.747 03:07:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:13.747 03:07:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:13.747 03:07:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:13.747 03:07:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:13.747 03:07:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:13.747 03:07:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:13.747 03:07:08 -- nvmf/common.sh@294 -- # net_devs=() 00:19:13.747 03:07:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:13.747 03:07:08 -- nvmf/common.sh@295 -- # e810=() 00:19:13.747 03:07:08 -- nvmf/common.sh@295 -- # local -ga e810 00:19:13.747 03:07:08 -- nvmf/common.sh@296 -- # x722=() 00:19:13.747 03:07:08 -- nvmf/common.sh@296 -- # local -ga x722 00:19:13.747 03:07:08 -- nvmf/common.sh@297 -- # mlx=() 00:19:13.747 03:07:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:13.747 03:07:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:13.747 03:07:08 -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:13.747 03:07:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:13.747 03:07:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:13.747 03:07:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:13.747 03:07:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:13.747 03:07:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:13.747 03:07:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:13.747 03:07:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:13.747 03:07:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:13.747 03:07:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:13.747 03:07:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:13.747 03:07:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:13.747 03:07:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:13.747 03:07:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:13.747 03:07:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:13.747 03:07:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:13.747 03:07:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:13.747 03:07:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:13.747 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:13.747 03:07:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:13.747 03:07:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:13.747 03:07:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:13.747 03:07:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:13.747 03:07:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:13.747 03:07:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:13.747 03:07:08 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:13.747 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:13.747 03:07:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:13.747 03:07:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:13.747 03:07:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:13.747 03:07:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:13.747 03:07:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:13.747 03:07:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:13.747 03:07:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:13.747 03:07:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:13.747 03:07:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:13.747 03:07:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.747 03:07:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:13.747 03:07:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.747 03:07:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:13.747 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:13.747 03:07:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.747 03:07:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:13.747 03:07:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.747 03:07:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:13.747 03:07:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.747 03:07:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:13.747 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:13.747 03:07:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.747 03:07:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:13.747 03:07:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:13.747 03:07:08 -- nvmf/common.sh@404 -- # [[ yes == yes 
]] 00:19:13.747 03:07:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:13.747 03:07:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:13.747 03:07:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:13.747 03:07:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:13.747 03:07:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:13.747 03:07:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:13.747 03:07:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:13.747 03:07:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:13.747 03:07:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:13.747 03:07:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:13.747 03:07:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:13.747 03:07:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:13.747 03:07:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:13.747 03:07:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:13.747 03:07:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:14.007 03:07:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:14.007 03:07:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:14.007 03:07:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:14.007 03:07:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:14.007 03:07:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:14.007 03:07:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:14.007 03:07:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:14.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:14.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:19:14.007 00:19:14.007 --- 10.0.0.2 ping statistics --- 00:19:14.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.007 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:19:14.007 03:07:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:14.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:14.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:19:14.007 00:19:14.007 --- 10.0.0.1 ping statistics --- 00:19:14.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:14.007 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:19:14.007 03:07:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:14.007 03:07:09 -- nvmf/common.sh@410 -- # return 0 00:19:14.007 03:07:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:14.007 03:07:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:14.007 03:07:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:14.007 03:07:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:14.007 03:07:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:14.007 03:07:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:14.007 03:07:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:14.007 03:07:09 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:14.007 03:07:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:14.007 03:07:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:14.007 03:07:09 -- common/autotest_common.sh@10 -- # set +x 00:19:14.007 03:07:09 -- nvmf/common.sh@469 -- # nvmfpid=2021439 00:19:14.007 03:07:09 -- nvmf/common.sh@470 -- # waitforlisten 2021439 00:19:14.007 03:07:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:14.007 03:07:09 -- common/autotest_common.sh@819 
-- # '[' -z 2021439 ']' 00:19:14.007 03:07:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.007 03:07:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:14.007 03:07:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.007 03:07:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:14.007 03:07:09 -- common/autotest_common.sh@10 -- # set +x 00:19:14.007 [2024-07-14 03:07:09.154100] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:14.008 [2024-07-14 03:07:09.154191] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.008 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.008 [2024-07-14 03:07:09.229084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:14.266 [2024-07-14 03:07:09.325136] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:14.266 [2024-07-14 03:07:09.325296] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.266 [2024-07-14 03:07:09.325314] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.266 [2024-07-14 03:07:09.325327] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:14.266 [2024-07-14 03:07:09.325428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:14.266 [2024-07-14 03:07:09.325486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:14.266 [2024-07-14 03:07:09.325543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:14.267 [2024-07-14 03:07:09.325546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:15.202 03:07:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:15.202 03:07:10 -- common/autotest_common.sh@852 -- # return 0 00:19:15.202 03:07:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:15.202 03:07:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:15.202 03:07:10 -- common/autotest_common.sh@10 -- # set +x 00:19:15.202 03:07:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.202 03:07:10 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:15.202 03:07:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:15.202 03:07:10 -- common/autotest_common.sh@10 -- # set +x 00:19:15.202 [2024-07-14 03:07:10.186573] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.202 03:07:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:15.202 03:07:10 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:15.202 03:07:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:15.202 03:07:10 -- common/autotest_common.sh@10 -- # set +x 00:19:15.202 Malloc0 00:19:15.202 03:07:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:15.202 03:07:10 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:15.202 03:07:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:15.202 03:07:10 -- common/autotest_common.sh@10 -- # set +x 00:19:15.202 03:07:10 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:19:15.202 03:07:10 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:15.202 03:07:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:15.202 03:07:10 -- common/autotest_common.sh@10 -- # set +x 00:19:15.202 03:07:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:15.202 03:07:10 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:15.202 03:07:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:15.202 03:07:10 -- common/autotest_common.sh@10 -- # set +x 00:19:15.202 [2024-07-14 03:07:10.239838] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:15.202 03:07:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:15.202 03:07:10 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:15.202 03:07:10 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:15.202 03:07:10 -- nvmf/common.sh@520 -- # config=() 00:19:15.202 03:07:10 -- nvmf/common.sh@520 -- # local subsystem config 00:19:15.202 03:07:10 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:15.202 03:07:10 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:15.202 { 00:19:15.202 "params": { 00:19:15.202 "name": "Nvme$subsystem", 00:19:15.202 "trtype": "$TEST_TRANSPORT", 00:19:15.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:15.202 "adrfam": "ipv4", 00:19:15.202 "trsvcid": "$NVMF_PORT", 00:19:15.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:15.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:15.202 "hdgst": ${hdgst:-false}, 00:19:15.202 "ddgst": ${ddgst:-false} 00:19:15.202 }, 00:19:15.202 "method": "bdev_nvme_attach_controller" 00:19:15.202 } 00:19:15.202 EOF 00:19:15.202 )") 00:19:15.202 03:07:10 -- nvmf/common.sh@542 -- # cat 00:19:15.202 03:07:10 -- nvmf/common.sh@544 -- # jq . 
00:19:15.202 03:07:10 -- nvmf/common.sh@545 -- # IFS=, 00:19:15.202 03:07:10 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:15.202 "params": { 00:19:15.202 "name": "Nvme1", 00:19:15.202 "trtype": "tcp", 00:19:15.202 "traddr": "10.0.0.2", 00:19:15.202 "adrfam": "ipv4", 00:19:15.202 "trsvcid": "4420", 00:19:15.202 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.202 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:15.202 "hdgst": false, 00:19:15.202 "ddgst": false 00:19:15.202 }, 00:19:15.202 "method": "bdev_nvme_attach_controller" 00:19:15.202 }' 00:19:15.202 [2024-07-14 03:07:10.284225] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:15.202 [2024-07-14 03:07:10.284293] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2021600 ] 00:19:15.202 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.202 [2024-07-14 03:07:10.344564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:15.202 [2024-07-14 03:07:10.432793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.202 [2024-07-14 03:07:10.432843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.202 [2024-07-14 03:07:10.432846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.462 [2024-07-14 03:07:10.602257] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:19:15.462 [2024-07-14 03:07:10.602316] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:15.462 I/O targets: 00:19:15.462 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:15.462 00:19:15.462 00:19:15.462 CUnit - A unit testing framework for C - Version 2.1-3 00:19:15.462 http://cunit.sourceforge.net/ 00:19:15.462 00:19:15.462 00:19:15.462 Suite: bdevio tests on: Nvme1n1 00:19:15.462 Test: blockdev write read block ...passed 00:19:15.462 Test: blockdev write zeroes read block ...passed 00:19:15.462 Test: blockdev write zeroes read no split ...passed 00:19:15.722 Test: blockdev write zeroes read split ...passed 00:19:15.722 Test: blockdev write zeroes read split partial ...passed 00:19:15.722 Test: blockdev reset ...[2024-07-14 03:07:10.812764] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:15.722 [2024-07-14 03:07:10.812877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2361860 (9): Bad file descriptor 00:19:15.722 [2024-07-14 03:07:10.831756] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:15.722 passed 00:19:15.722 Test: blockdev write read 8 blocks ...passed 00:19:15.722 Test: blockdev write read size > 128k ...passed 00:19:15.722 Test: blockdev write read invalid size ...passed 00:19:15.722 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:15.722 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:15.722 Test: blockdev write read max offset ...passed 00:19:15.981 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:15.981 Test: blockdev writev readv 8 blocks ...passed 00:19:15.981 Test: blockdev writev readv 30 x 1block ...passed 00:19:15.981 Test: blockdev writev readv block ...passed 00:19:15.981 Test: blockdev writev readv size > 128k ...passed 00:19:15.981 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:15.981 Test: blockdev comparev and writev ...[2024-07-14 03:07:11.128372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:15.981 [2024-07-14 03:07:11.128406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:15.981 [2024-07-14 03:07:11.128430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:15.981 [2024-07-14 03:07:11.128447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:15.981 [2024-07-14 03:07:11.128824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:15.981 [2024-07-14 03:07:11.128848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:15.981 [2024-07-14 03:07:11.128877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:15.981 [2024-07-14 03:07:11.128895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:15.981 [2024-07-14 03:07:11.129261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:15.981 [2024-07-14 03:07:11.129284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:15.981 [2024-07-14 03:07:11.129305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:15.981 [2024-07-14 03:07:11.129321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:15.981 [2024-07-14 03:07:11.129671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:15.981 [2024-07-14 03:07:11.129695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:15.981 [2024-07-14 03:07:11.129716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:15.981 [2024-07-14 03:07:11.129732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:15.981 passed 00:19:15.981 Test: blockdev nvme passthru rw ...passed 00:19:15.981 Test: blockdev nvme passthru vendor specific ...[2024-07-14 03:07:11.212216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:15.981 [2024-07-14 03:07:11.212242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:15.981 [2024-07-14 03:07:11.212440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:15.981 [2024-07-14 03:07:11.212464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:15.981 [2024-07-14 03:07:11.212659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:15.981 [2024-07-14 03:07:11.212687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:15.981 [2024-07-14 03:07:11.212891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:15.981 [2024-07-14 03:07:11.212915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:15.981 passed 00:19:15.981 Test: blockdev nvme admin passthru ...passed 00:19:16.238 Test: blockdev copy ...passed 00:19:16.238 00:19:16.238 Run Summary: Type Total Ran Passed Failed Inactive 00:19:16.238 suites 1 1 n/a 0 0 00:19:16.238 tests 23 23 23 0 0 00:19:16.238 asserts 152 152 152 0 n/a 00:19:16.238 00:19:16.238 Elapsed time = 1.324 seconds 00:19:16.238 03:07:11 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:16.238 03:07:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:16.238 03:07:11 -- common/autotest_common.sh@10 -- # set +x 00:19:16.238 03:07:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:16.238 03:07:11 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:16.238 03:07:11 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:16.238 03:07:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:16.238 03:07:11 -- nvmf/common.sh@116 -- # sync 00:19:16.238 
03:07:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:16.238 03:07:11 -- nvmf/common.sh@119 -- # set +e 00:19:16.238 03:07:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:16.238 03:07:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:16.238 rmmod nvme_tcp 00:19:16.238 rmmod nvme_fabrics 00:19:16.238 rmmod nvme_keyring 00:19:16.498 03:07:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:16.498 03:07:11 -- nvmf/common.sh@123 -- # set -e 00:19:16.498 03:07:11 -- nvmf/common.sh@124 -- # return 0 00:19:16.498 03:07:11 -- nvmf/common.sh@477 -- # '[' -n 2021439 ']' 00:19:16.498 03:07:11 -- nvmf/common.sh@478 -- # killprocess 2021439 00:19:16.498 03:07:11 -- common/autotest_common.sh@926 -- # '[' -z 2021439 ']' 00:19:16.498 03:07:11 -- common/autotest_common.sh@930 -- # kill -0 2021439 00:19:16.498 03:07:11 -- common/autotest_common.sh@931 -- # uname 00:19:16.498 03:07:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:16.498 03:07:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2021439 00:19:16.498 03:07:11 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:16.498 03:07:11 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:16.498 03:07:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2021439' 00:19:16.498 killing process with pid 2021439 00:19:16.498 03:07:11 -- common/autotest_common.sh@945 -- # kill 2021439 00:19:16.498 03:07:11 -- common/autotest_common.sh@950 -- # wait 2021439 00:19:16.759 03:07:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:16.759 03:07:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:16.759 03:07:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:16.759 03:07:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:16.759 03:07:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:16.759 03:07:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.759 03:07:11 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.759 03:07:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.659 03:07:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:18.659 00:19:18.659 real 0m6.871s 00:19:18.659 user 0m12.808s 00:19:18.659 sys 0m2.041s 00:19:18.659 03:07:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:18.659 03:07:13 -- common/autotest_common.sh@10 -- # set +x 00:19:18.659 ************************************ 00:19:18.659 END TEST nvmf_bdevio 00:19:18.659 ************************************ 00:19:18.659 03:07:13 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:19:18.659 03:07:13 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:18.659 03:07:13 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:18.659 03:07:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:18.659 03:07:13 -- common/autotest_common.sh@10 -- # set +x 00:19:18.659 ************************************ 00:19:18.659 START TEST nvmf_bdevio_no_huge 00:19:18.659 ************************************ 00:19:18.659 03:07:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:18.659 * Looking for test storage... 
00:19:18.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:18.659 03:07:13 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:18.659 03:07:13 -- nvmf/common.sh@7 -- # uname -s 00:19:18.659 03:07:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:18.660 03:07:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:18.660 03:07:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:18.660 03:07:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:18.660 03:07:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:18.660 03:07:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:18.660 03:07:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:18.660 03:07:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:18.660 03:07:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:18.660 03:07:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:18.660 03:07:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.660 03:07:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:18.660 03:07:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:18.660 03:07:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:18.660 03:07:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:18.660 03:07:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:18.660 03:07:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.660 03:07:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.660 03:07:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.660 03:07:13 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.660 03:07:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.660 03:07:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.660 03:07:13 -- paths/export.sh@5 -- # export PATH 00:19:18.660 03:07:13 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:18.660 03:07:13 -- nvmf/common.sh@46 -- # : 0 00:19:18.660 03:07:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:18.660 03:07:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:18.660 03:07:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:18.660 03:07:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:18.660 03:07:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:18.660 03:07:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:18.660 03:07:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:18.660 03:07:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:18.660 03:07:13 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:18.660 03:07:13 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:18.660 03:07:13 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:18.660 03:07:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:18.660 03:07:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:18.660 03:07:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:18.660 03:07:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:18.660 03:07:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:18.660 03:07:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.660 03:07:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:18.660 03:07:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:18.660 03:07:13 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:18.660 03:07:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:18.660 03:07:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:18.660 03:07:13 -- common/autotest_common.sh@10 -- # set +x 00:19:21.190 03:07:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:21.190 03:07:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:21.190 03:07:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:21.190 03:07:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:21.190 03:07:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:21.190 03:07:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:21.190 03:07:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:21.190 03:07:15 -- nvmf/common.sh@294 -- # net_devs=() 00:19:21.190 03:07:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:21.190 03:07:15 -- nvmf/common.sh@295 -- # e810=() 00:19:21.190 03:07:15 -- nvmf/common.sh@295 -- # local -ga e810 00:19:21.190 03:07:15 -- nvmf/common.sh@296 -- # x722=() 00:19:21.190 03:07:15 -- nvmf/common.sh@296 -- # local -ga x722 00:19:21.190 03:07:15 -- nvmf/common.sh@297 -- # mlx=() 00:19:21.190 03:07:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:21.190 03:07:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:21.190 03:07:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:21.190 03:07:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:21.190 03:07:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:21.190 03:07:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:21.190 03:07:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:21.190 03:07:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:21.190 03:07:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:21.190 03:07:15 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:21.190 03:07:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:21.190 03:07:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:21.190 03:07:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:21.190 03:07:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:21.190 03:07:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:21.190 03:07:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:21.190 03:07:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:21.190 03:07:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:21.190 03:07:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:21.190 03:07:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:21.190 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:21.190 03:07:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:21.190 03:07:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:21.190 03:07:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.190 03:07:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.190 03:07:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:21.190 03:07:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:21.190 03:07:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:21.190 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:21.190 03:07:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:21.190 03:07:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:21.190 03:07:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:21.190 03:07:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:21.190 03:07:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:21.190 03:07:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:21.190 03:07:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:21.190 03:07:15 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:21.190 03:07:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:21.190 03:07:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.190 03:07:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:21.190 03:07:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.190 03:07:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:21.190 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:21.190 03:07:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.190 03:07:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:21.190 03:07:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:21.190 03:07:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:21.190 03:07:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:21.190 03:07:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:21.190 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:21.190 03:07:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:21.190 03:07:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:21.190 03:07:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:21.190 03:07:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:21.190 03:07:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:21.190 03:07:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:21.190 03:07:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:21.190 03:07:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:21.190 03:07:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:21.190 03:07:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:21.190 03:07:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:21.190 03:07:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:21.190 03:07:15 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:21.190 03:07:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:21.190 03:07:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:21.190 03:07:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:21.190 03:07:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:21.190 03:07:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:21.190 03:07:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:21.190 03:07:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:21.190 03:07:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:21.190 03:07:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:21.190 03:07:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:21.190 03:07:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:21.190 03:07:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:21.190 03:07:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:21.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:21.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:19:21.190 00:19:21.190 --- 10.0.0.2 ping statistics --- 00:19:21.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.190 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:19:21.190 03:07:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:21.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:21.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:19:21.190 00:19:21.190 --- 10.0.0.1 ping statistics --- 00:19:21.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:21.190 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:19:21.190 03:07:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:21.190 03:07:15 -- nvmf/common.sh@410 -- # return 0 00:19:21.190 03:07:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:21.190 03:07:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:21.190 03:07:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:21.190 03:07:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:21.190 03:07:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:21.190 03:07:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:21.190 03:07:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:21.190 03:07:16 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:21.190 03:07:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:21.190 03:07:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:21.190 03:07:16 -- common/autotest_common.sh@10 -- # set +x 00:19:21.190 03:07:16 -- nvmf/common.sh@469 -- # nvmfpid=2023685 00:19:21.190 03:07:16 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:21.190 03:07:16 -- nvmf/common.sh@470 -- # waitforlisten 2023685 00:19:21.190 03:07:16 -- common/autotest_common.sh@819 -- # '[' -z 2023685 ']' 00:19:21.190 03:07:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:21.190 03:07:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:21.190 03:07:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:21.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:21.190 03:07:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:21.190 03:07:16 -- common/autotest_common.sh@10 -- # set +x 00:19:21.190 [2024-07-14 03:07:16.058658] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:21.190 [2024-07-14 03:07:16.058745] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:21.190 [2024-07-14 03:07:16.131092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:21.190 [2024-07-14 03:07:16.210628] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:21.190 [2024-07-14 03:07:16.210787] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:21.190 [2024-07-14 03:07:16.210806] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:21.190 [2024-07-14 03:07:16.210820] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:21.190 [2024-07-14 03:07:16.210919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:21.190 [2024-07-14 03:07:16.210973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:21.190 [2024-07-14 03:07:16.211025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:21.191 [2024-07-14 03:07:16.211028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:22.128 03:07:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:22.128 03:07:17 -- common/autotest_common.sh@852 -- # return 0 00:19:22.128 03:07:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:22.128 03:07:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:22.128 03:07:17 -- common/autotest_common.sh@10 -- # set +x 00:19:22.128 03:07:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.128 03:07:17 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:22.128 03:07:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:22.128 03:07:17 -- common/autotest_common.sh@10 -- # set +x 00:19:22.128 [2024-07-14 03:07:17.068136] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.128 03:07:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:22.128 03:07:17 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:22.128 03:07:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:22.128 03:07:17 -- common/autotest_common.sh@10 -- # set +x 00:19:22.128 Malloc0 00:19:22.128 03:07:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:22.128 03:07:17 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:22.128 03:07:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:22.128 03:07:17 -- common/autotest_common.sh@10 -- # set +x 00:19:22.128 03:07:17 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:19:22.128 03:07:17 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:22.128 03:07:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:22.128 03:07:17 -- common/autotest_common.sh@10 -- # set +x 00:19:22.128 03:07:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:22.128 03:07:17 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:22.128 03:07:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:22.128 03:07:17 -- common/autotest_common.sh@10 -- # set +x 00:19:22.128 [2024-07-14 03:07:17.106455] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.128 03:07:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:22.128 03:07:17 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:22.128 03:07:17 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:22.128 03:07:17 -- nvmf/common.sh@520 -- # config=() 00:19:22.128 03:07:17 -- nvmf/common.sh@520 -- # local subsystem config 00:19:22.128 03:07:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:22.128 03:07:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:22.128 { 00:19:22.128 "params": { 00:19:22.128 "name": "Nvme$subsystem", 00:19:22.128 "trtype": "$TEST_TRANSPORT", 00:19:22.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:22.128 "adrfam": "ipv4", 00:19:22.128 "trsvcid": "$NVMF_PORT", 00:19:22.128 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:22.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:22.128 "hdgst": ${hdgst:-false}, 00:19:22.128 "ddgst": ${ddgst:-false} 00:19:22.128 }, 00:19:22.128 "method": "bdev_nvme_attach_controller" 00:19:22.128 } 00:19:22.128 EOF 00:19:22.128 )") 00:19:22.128 03:07:17 -- nvmf/common.sh@542 -- # cat 00:19:22.128 03:07:17 -- nvmf/common.sh@544 -- # jq 
. 00:19:22.128 03:07:17 -- nvmf/common.sh@545 -- # IFS=, 00:19:22.128 03:07:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:22.128 "params": { 00:19:22.129 "name": "Nvme1", 00:19:22.129 "trtype": "tcp", 00:19:22.129 "traddr": "10.0.0.2", 00:19:22.129 "adrfam": "ipv4", 00:19:22.129 "trsvcid": "4420", 00:19:22.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:22.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:22.129 "hdgst": false, 00:19:22.129 "ddgst": false 00:19:22.129 }, 00:19:22.129 "method": "bdev_nvme_attach_controller" 00:19:22.129 }' 00:19:22.129 [2024-07-14 03:07:17.152073] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:22.129 [2024-07-14 03:07:17.152148] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2023841 ] 00:19:22.129 [2024-07-14 03:07:17.216356] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:22.129 [2024-07-14 03:07:17.299843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.129 [2024-07-14 03:07:17.299892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.129 [2024-07-14 03:07:17.299897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.387 [2024-07-14 03:07:17.532364] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:19:22.387 [2024-07-14 03:07:17.532417] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:22.387 I/O targets: 00:19:22.387 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:22.387 00:19:22.387 00:19:22.387 CUnit - A unit testing framework for C - Version 2.1-3 00:19:22.387 http://cunit.sourceforge.net/ 00:19:22.387 00:19:22.387 00:19:22.387 Suite: bdevio tests on: Nvme1n1 00:19:22.387 Test: blockdev write read block ...passed 00:19:22.387 Test: blockdev write zeroes read block ...passed 00:19:22.387 Test: blockdev write zeroes read no split ...passed 00:19:22.644 Test: blockdev write zeroes read split ...passed 00:19:22.644 Test: blockdev write zeroes read split partial ...passed 00:19:22.644 Test: blockdev reset ...[2024-07-14 03:07:17.750332] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:22.644 [2024-07-14 03:07:17.750440] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e7ef0 (9): Bad file descriptor 00:19:22.644 [2024-07-14 03:07:17.770793] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:22.644 passed 00:19:22.644 Test: blockdev write read 8 blocks ...passed 00:19:22.644 Test: blockdev write read size > 128k ...passed 00:19:22.644 Test: blockdev write read invalid size ...passed 00:19:22.644 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:22.644 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:22.644 Test: blockdev write read max offset ...passed 00:19:22.902 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:22.902 Test: blockdev writev readv 8 blocks ...passed 00:19:22.902 Test: blockdev writev readv 30 x 1block ...passed 00:19:22.902 Test: blockdev writev readv block ...passed 00:19:22.902 Test: blockdev writev readv size > 128k ...passed 00:19:22.902 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:22.902 Test: blockdev comparev and writev ...[2024-07-14 03:07:18.028166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.902 [2024-07-14 03:07:18.028199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:22.902 [2024-07-14 03:07:18.028223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.902 [2024-07-14 03:07:18.028239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:22.902 [2024-07-14 03:07:18.028651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.902 [2024-07-14 03:07:18.028676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:22.902 [2024-07-14 03:07:18.028697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.902 [2024-07-14 03:07:18.028712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:22.902 [2024-07-14 03:07:18.029133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.902 [2024-07-14 03:07:18.029157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:22.902 [2024-07-14 03:07:18.029179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.902 [2024-07-14 03:07:18.029194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:22.902 [2024-07-14 03:07:18.029601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.902 [2024-07-14 03:07:18.029624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:22.902 [2024-07-14 03:07:18.029645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:22.902 [2024-07-14 03:07:18.029661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:22.902 passed 00:19:22.902 Test: blockdev nvme passthru rw ...passed 00:19:22.902 Test: blockdev nvme passthru vendor specific ...[2024-07-14 03:07:18.112270] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.902 [2024-07-14 03:07:18.112297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:22.902 [2024-07-14 03:07:18.112499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.902 [2024-07-14 03:07:18.112521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:22.902 [2024-07-14 03:07:18.112715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.902 [2024-07-14 03:07:18.112737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:22.902 [2024-07-14 03:07:18.112942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:22.902 [2024-07-14 03:07:18.112965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:22.902 passed 00:19:22.902 Test: blockdev nvme admin passthru ...passed 00:19:23.160 Test: blockdev copy ...passed 00:19:23.160 00:19:23.160 Run Summary: Type Total Ran Passed Failed Inactive 00:19:23.160 suites 1 1 n/a 0 0 00:19:23.160 tests 23 23 23 0 0 00:19:23.160 asserts 152 152 152 0 n/a 00:19:23.161 00:19:23.161 Elapsed time = 1.276 seconds 00:19:23.419 03:07:18 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:23.419 03:07:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:23.419 03:07:18 -- common/autotest_common.sh@10 -- # set +x 00:19:23.419 03:07:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:23.419 03:07:18 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:23.419 03:07:18 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:23.419 03:07:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:23.419 03:07:18 -- nvmf/common.sh@116 -- # sync 00:19:23.419 
03:07:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:23.419 03:07:18 -- nvmf/common.sh@119 -- # set +e 00:19:23.419 03:07:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:23.419 03:07:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:23.419 rmmod nvme_tcp 00:19:23.419 rmmod nvme_fabrics 00:19:23.419 rmmod nvme_keyring 00:19:23.419 03:07:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:23.419 03:07:18 -- nvmf/common.sh@123 -- # set -e 00:19:23.419 03:07:18 -- nvmf/common.sh@124 -- # return 0 00:19:23.419 03:07:18 -- nvmf/common.sh@477 -- # '[' -n 2023685 ']' 00:19:23.419 03:07:18 -- nvmf/common.sh@478 -- # killprocess 2023685 00:19:23.419 03:07:18 -- common/autotest_common.sh@926 -- # '[' -z 2023685 ']' 00:19:23.419 03:07:18 -- common/autotest_common.sh@930 -- # kill -0 2023685 00:19:23.419 03:07:18 -- common/autotest_common.sh@931 -- # uname 00:19:23.419 03:07:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:23.419 03:07:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2023685 00:19:23.419 03:07:18 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:23.419 03:07:18 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:23.419 03:07:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2023685' 00:19:23.419 killing process with pid 2023685 00:19:23.419 03:07:18 -- common/autotest_common.sh@945 -- # kill 2023685 00:19:23.419 03:07:18 -- common/autotest_common.sh@950 -- # wait 2023685 00:19:23.985 03:07:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:23.985 03:07:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:23.985 03:07:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:23.985 03:07:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:23.985 03:07:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:23.985 03:07:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.985 03:07:18 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.985 03:07:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.938 03:07:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:25.938 00:19:25.938 real 0m7.190s 00:19:25.938 user 0m13.962s 00:19:25.938 sys 0m2.540s 00:19:25.938 03:07:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:25.938 03:07:21 -- common/autotest_common.sh@10 -- # set +x 00:19:25.938 ************************************ 00:19:25.938 END TEST nvmf_bdevio_no_huge 00:19:25.938 ************************************ 00:19:25.938 03:07:21 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:25.938 03:07:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:25.938 03:07:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:25.938 03:07:21 -- common/autotest_common.sh@10 -- # set +x 00:19:25.938 ************************************ 00:19:25.938 START TEST nvmf_tls 00:19:25.938 ************************************ 00:19:25.938 03:07:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:25.938 * Looking for test storage... 
00:19:25.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:25.938 03:07:21 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:25.938 03:07:21 -- nvmf/common.sh@7 -- # uname -s 00:19:25.938 03:07:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:25.938 03:07:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:25.938 03:07:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:25.938 03:07:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:25.938 03:07:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:25.938 03:07:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:25.938 03:07:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:25.938 03:07:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:25.938 03:07:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:25.938 03:07:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:25.938 03:07:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.938 03:07:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.938 03:07:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:25.938 03:07:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:25.938 03:07:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:25.938 03:07:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:25.938 03:07:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:25.938 03:07:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:25.938 03:07:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:25.938 03:07:21 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.938 03:07:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.938 03:07:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.938 03:07:21 -- paths/export.sh@5 -- # export PATH 00:19:25.938 03:07:21 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:25.938 03:07:21 -- nvmf/common.sh@46 -- # : 0 00:19:25.938 03:07:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:25.938 03:07:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:25.938 03:07:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:25.938 03:07:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:25.939 03:07:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:25.939 03:07:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:25.939 03:07:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:25.939 03:07:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:25.939 03:07:21 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:25.939 03:07:21 -- target/tls.sh@71 -- # nvmftestinit 00:19:25.939 03:07:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:25.939 03:07:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:25.939 03:07:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:25.939 03:07:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:25.939 03:07:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:25.939 03:07:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.939 03:07:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:25.939 03:07:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:25.939 03:07:21 -- nvmf/common.sh@402 -- # [[ phy != virt 
]] 00:19:25.939 03:07:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:25.939 03:07:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:25.939 03:07:21 -- common/autotest_common.sh@10 -- # set +x 00:19:28.470 03:07:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:28.470 03:07:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:28.470 03:07:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:28.471 03:07:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:28.471 03:07:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:28.471 03:07:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:28.471 03:07:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:28.471 03:07:23 -- nvmf/common.sh@294 -- # net_devs=() 00:19:28.471 03:07:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:28.471 03:07:23 -- nvmf/common.sh@295 -- # e810=() 00:19:28.471 03:07:23 -- nvmf/common.sh@295 -- # local -ga e810 00:19:28.471 03:07:23 -- nvmf/common.sh@296 -- # x722=() 00:19:28.471 03:07:23 -- nvmf/common.sh@296 -- # local -ga x722 00:19:28.471 03:07:23 -- nvmf/common.sh@297 -- # mlx=() 00:19:28.471 03:07:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:28.471 03:07:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.471 03:07:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.471 03:07:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.471 03:07:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.471 03:07:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.471 03:07:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.471 03:07:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.471 03:07:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.471 03:07:23 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.471 03:07:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.471 03:07:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.471 03:07:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:28.471 03:07:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:28.471 03:07:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:28.471 03:07:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:28.471 03:07:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:28.471 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:28.471 03:07:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:28.471 03:07:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:28.471 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:28.471 03:07:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:28.471 03:07:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:28.471 03:07:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.471 03:07:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:28.471 03:07:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.471 03:07:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:28.471 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:28.471 03:07:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.471 03:07:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:28.471 03:07:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.471 03:07:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:28.471 03:07:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.471 03:07:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:28.471 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:28.471 03:07:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.471 03:07:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:28.471 03:07:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:28.471 03:07:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:28.471 03:07:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.471 03:07:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:28.471 03:07:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:28.471 03:07:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:28.471 03:07:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:28.471 03:07:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:28.471 03:07:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:19:28.471 03:07:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:28.471 03:07:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.471 03:07:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:28.471 03:07:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:28.471 03:07:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:28.471 03:07:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:28.471 03:07:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:28.471 03:07:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:28.471 03:07:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:28.471 03:07:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:28.471 03:07:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:28.471 03:07:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:28.471 03:07:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:28.471 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:28.471 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:19:28.471 00:19:28.471 --- 10.0.0.2 ping statistics --- 00:19:28.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.471 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:19:28.471 03:07:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:28.471 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:28.471 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:19:28.471 00:19:28.471 --- 10.0.0.1 ping statistics --- 00:19:28.471 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.471 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:19:28.471 03:07:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:28.471 03:07:23 -- nvmf/common.sh@410 -- # return 0 00:19:28.471 03:07:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:28.471 03:07:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.471 03:07:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:28.471 03:07:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.471 03:07:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:28.471 03:07:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:28.471 03:07:23 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:28.471 03:07:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:28.471 03:07:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:28.471 03:07:23 -- common/autotest_common.sh@10 -- # set +x 00:19:28.471 03:07:23 -- nvmf/common.sh@469 -- # nvmfpid=2026053 00:19:28.471 03:07:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:28.471 03:07:23 -- nvmf/common.sh@470 -- # waitforlisten 2026053 00:19:28.471 03:07:23 -- common/autotest_common.sh@819 -- # '[' -z 2026053 ']' 00:19:28.471 03:07:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.471 03:07:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:28.471 03:07:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:28.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.471 03:07:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:28.471 03:07:23 -- common/autotest_common.sh@10 -- # set +x 00:19:28.471 [2024-07-14 03:07:23.337584] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:28.471 [2024-07-14 03:07:23.337675] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.471 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.471 [2024-07-14 03:07:23.415550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.471 [2024-07-14 03:07:23.504268] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:28.471 [2024-07-14 03:07:23.504429] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.471 [2024-07-14 03:07:23.504448] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.471 [2024-07-14 03:07:23.504462] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:28.471 [2024-07-14 03:07:23.504501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.471 03:07:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:28.471 03:07:23 -- common/autotest_common.sh@852 -- # return 0 00:19:28.471 03:07:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:28.471 03:07:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:28.471 03:07:23 -- common/autotest_common.sh@10 -- # set +x 00:19:28.471 03:07:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.471 03:07:23 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:19:28.471 03:07:23 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:28.728 true 00:19:28.728 03:07:23 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:28.728 03:07:23 -- target/tls.sh@82 -- # jq -r .tls_version 00:19:28.986 03:07:24 -- target/tls.sh@82 -- # version=0 00:19:28.986 03:07:24 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:19:28.986 03:07:24 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:29.244 03:07:24 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:29.244 03:07:24 -- target/tls.sh@90 -- # jq -r .tls_version 00:19:29.502 03:07:24 -- target/tls.sh@90 -- # version=13 00:19:29.502 03:07:24 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:19:29.502 03:07:24 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:29.760 03:07:24 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:29.760 03:07:24 -- target/tls.sh@98 -- # jq -r .tls_version 
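The tls_version checks in this stretch of the log follow one pattern per value (13, then 7): set the option, read it back with sock_impl_get_options, and compare. A sketch of that round-trip (the rpc.py path is shortened here, and a running nvmf_tgt listening on the default RPC socket is assumed):

```shell
# Illustrative set/get/verify round-trip for the ssl sock implementation.
# Assumes nvmf_tgt is up and rpc.py points at its RPC socket.
rpc=./scripts/rpc.py

$rpc sock_set_default_impl -i ssl                     # select the ssl impl
$rpc sock_impl_set_options -i ssl --tls-version 13    # request TLS 1.3
got=$($rpc sock_impl_get_options -i ssl | jq -r .tls_version)
[ "$got" = "13" ] || echo "tls_version mismatch: $got"
```

The same pattern is used for enable_ktls later in the log (--enable-ktls / --disable-ktls followed by a jq read-back).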
00:19:30.017 03:07:25 -- target/tls.sh@98 -- # version=7 00:19:30.017 03:07:25 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:19:30.017 03:07:25 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:30.017 03:07:25 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:30.274 03:07:25 -- target/tls.sh@105 -- # ktls=false 00:19:30.274 03:07:25 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:19:30.274 03:07:25 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:30.531 03:07:25 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:30.531 03:07:25 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:30.789 03:07:25 -- target/tls.sh@113 -- # ktls=true 00:19:30.789 03:07:25 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:19:30.789 03:07:25 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:30.789 03:07:26 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:30.789 03:07:26 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:19:31.047 03:07:26 -- target/tls.sh@121 -- # ktls=false 00:19:31.047 03:07:26 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:19:31.047 03:07:26 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:19:31.047 03:07:26 -- target/tls.sh@49 -- # local key hash crc 00:19:31.047 03:07:26 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:19:31.047 03:07:26 -- target/tls.sh@51 -- # hash=01 00:19:31.047 03:07:26 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:19:31.047 03:07:26 -- target/tls.sh@52 -- # gzip -1 -c 00:19:31.047 03:07:26 -- target/tls.sh@52 -- # tail -c8 00:19:31.047 03:07:26 -- 
target/tls.sh@52 -- # head -c 4 00:19:31.047 03:07:26 -- target/tls.sh@52 -- # crc='p$H�' 00:19:31.047 03:07:26 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:31.047 03:07:26 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:19:31.047 03:07:26 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:31.047 03:07:26 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:31.047 03:07:26 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:19:31.047 03:07:26 -- target/tls.sh@49 -- # local key hash crc 00:19:31.047 03:07:26 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:19:31.047 03:07:26 -- target/tls.sh@51 -- # hash=01 00:19:31.047 03:07:26 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:19:31.047 03:07:26 -- target/tls.sh@52 -- # gzip -1 -c 00:19:31.047 03:07:26 -- target/tls.sh@52 -- # tail -c8 00:19:31.047 03:07:26 -- target/tls.sh@52 -- # head -c 4 00:19:31.047 03:07:26 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:19:31.047 03:07:26 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:31.047 03:07:26 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:19:31.047 03:07:26 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:31.047 03:07:26 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:31.047 03:07:26 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:31.047 03:07:26 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:31.047 03:07:26 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:31.047 03:07:26 -- target/tls.sh@134 -- # echo -n 
NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:31.047 03:07:26 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:31.047 03:07:26 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:31.047 03:07:26 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:31.305 03:07:26 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:31.872 03:07:26 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:31.872 03:07:26 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:31.872 03:07:26 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:31.872 [2024-07-14 03:07:27.064501] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.872 03:07:27 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:32.131 03:07:27 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:32.389 [2024-07-14 03:07:27.545833] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:32.389 [2024-07-14 03:07:27.546094] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.389 03:07:27 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:32.647 malloc0 00:19:32.647 03:07:27 -- 
target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:32.906 03:07:28 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:33.163 03:07:28 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:33.163 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.130 Initializing NVMe Controllers 00:19:43.130 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:43.130 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:43.130 Initialization complete. Launching workers. 
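The format_interchange_psk steps logged above derive the interchange key in three moves: take the CRC-32 of the hex key string via the gzip trailer, append the 4 raw CRC bytes to the key, then base64 the result under the NVMeTLSkey-1:01: prefix with a trailing colon. The same derivation as a standalone sketch (this relies on the gzip trailer layout, which is CRC-32 little-endian followed by the input size):

```shell
# Interchange PSK derivation, following format_interchange_psk in the log.
key=00112233445566778899aabbccddeeff   # configured hex key (hash type 01)

# gzip's 8-byte trailer is CRC-32 (little-endian) then input size;
# tail -c8 | head -c4 extracts the 4 CRC bytes of the key string.
crc=$(printf '%s' "$key" | gzip -1 -c | tail -c8 | head -c4)

# Append the raw CRC bytes and base64 the whole 36-byte blob under the
# NVMeTLSkey-1:<hash>: prefix, terminated by a colon.
psk="NVMeTLSkey-1:01:$(printf '%s' "$key$crc" | base64):"
echo "$psk"
```

Running this reproduces the first key in the log, NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:, which is then written to key1.txt and chmod 0600 before being handed to nvmf_subsystem_add_host.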
00:19:43.130 ======================================================== 00:19:43.130 Latency(us) 00:19:43.130 Device Information : IOPS MiB/s Average min max 00:19:43.130 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7745.25 30.25 8265.73 1384.16 9292.27 00:19:43.130 ======================================================== 00:19:43.130 Total : 7745.25 30.25 8265.73 1384.16 9292.27 00:19:43.130 00:19:43.130 03:07:38 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:43.130 03:07:38 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:43.130 03:07:38 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:43.130 03:07:38 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:43.130 03:07:38 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:19:43.130 03:07:38 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:43.130 03:07:38 -- target/tls.sh@28 -- # bdevperf_pid=2027887 00:19:43.130 03:07:38 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:43.130 03:07:38 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:43.130 03:07:38 -- target/tls.sh@31 -- # waitforlisten 2027887 /var/tmp/bdevperf.sock 00:19:43.130 03:07:38 -- common/autotest_common.sh@819 -- # '[' -z 2027887 ']' 00:19:43.130 03:07:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:43.130 03:07:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:43.130 03:07:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:43.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:43.130 03:07:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:43.130 03:07:38 -- common/autotest_common.sh@10 -- # set +x 00:19:43.388 [2024-07-14 03:07:38.414010] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:43.388 [2024-07-14 03:07:38.414091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2027887 ] 00:19:43.388 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.388 [2024-07-14 03:07:38.473149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.388 [2024-07-14 03:07:38.556964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.320 03:07:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:44.320 03:07:39 -- common/autotest_common.sh@852 -- # return 0 00:19:44.320 03:07:39 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:44.577 [2024-07-14 03:07:39.590071] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:44.577 TLSTESTn1 00:19:44.577 03:07:39 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:44.577 Running I/O for 10 seconds... 
00:19:56.803 00:19:56.803 Latency(us) 00:19:56.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.803 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:56.803 Verification LBA range: start 0x0 length 0x2000 00:19:56.803 TLSTESTn1 : 10.04 1861.77 7.27 0.00 0.00 68647.53 8204.14 87381.33 00:19:56.803 =================================================================================================================== 00:19:56.803 Total : 1861.77 7.27 0.00 0.00 68647.53 8204.14 87381.33 00:19:56.803 0 00:19:56.803 03:07:49 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:56.803 03:07:49 -- target/tls.sh@45 -- # killprocess 2027887 00:19:56.803 03:07:49 -- common/autotest_common.sh@926 -- # '[' -z 2027887 ']' 00:19:56.803 03:07:49 -- common/autotest_common.sh@930 -- # kill -0 2027887 00:19:56.803 03:07:49 -- common/autotest_common.sh@931 -- # uname 00:19:56.803 03:07:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:56.803 03:07:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2027887 00:19:56.803 03:07:49 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:56.803 03:07:49 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:56.803 03:07:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2027887' 00:19:56.803 killing process with pid 2027887 00:19:56.803 03:07:49 -- common/autotest_common.sh@945 -- # kill 2027887 00:19:56.803 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.803 00:19:56.803 Latency(us) 00:19:56.803 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.803 =================================================================================================================== 00:19:56.803 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:56.803 03:07:49 -- common/autotest_common.sh@950 -- # wait 2027887 00:19:56.803 03:07:50 -- 
target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:56.803 03:07:50 -- common/autotest_common.sh@640 -- # local es=0 00:19:56.803 03:07:50 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:56.803 03:07:50 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:19:56.803 03:07:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:56.803 03:07:50 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:19:56.803 03:07:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:56.803 03:07:50 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:56.803 03:07:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:56.803 03:07:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:56.803 03:07:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:56.803 03:07:50 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:19:56.803 03:07:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.803 03:07:50 -- target/tls.sh@28 -- # bdevperf_pid=2029273 00:19:56.803 03:07:50 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:56.803 03:07:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:56.803 03:07:50 -- target/tls.sh@31 -- # waitforlisten 2029273 /var/tmp/bdevperf.sock 00:19:56.803 03:07:50 -- common/autotest_common.sh@819 -- # '[' -z 2029273 ']' 00:19:56.803 03:07:50 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.803 03:07:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:56.803 03:07:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.803 03:07:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:56.803 03:07:50 -- common/autotest_common.sh@10 -- # set +x 00:19:56.803 [2024-07-14 03:07:50.165104] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:56.803 [2024-07-14 03:07:50.165211] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2029273 ] 00:19:56.803 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.803 [2024-07-14 03:07:50.227047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.803 [2024-07-14 03:07:50.307800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:56.804 03:07:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:56.804 03:07:51 -- common/autotest_common.sh@852 -- # return 0 00:19:56.804 03:07:51 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:56.804 [2024-07-14 03:07:51.356114] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:56.804 [2024-07-14 03:07:51.367028] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:56.804 [2024-07-14 03:07:51.367280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5a130 (107): Transport endpoint is not connected 00:19:56.804 [2024-07-14 03:07:51.368270] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b5a130 (9): Bad file descriptor 00:19:56.804 [2024-07-14 03:07:51.369269] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:56.804 [2024-07-14 03:07:51.369288] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:56.804 [2024-07-14 03:07:51.369315] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:56.804 request: 00:19:56.804 { 00:19:56.804 "name": "TLSTEST", 00:19:56.804 "trtype": "tcp", 00:19:56.804 "traddr": "10.0.0.2", 00:19:56.804 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.804 "adrfam": "ipv4", 00:19:56.804 "trsvcid": "4420", 00:19:56.804 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.804 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:19:56.804 "method": "bdev_nvme_attach_controller", 00:19:56.804 "req_id": 1 00:19:56.804 } 00:19:56.804 Got JSON-RPC error response 00:19:56.804 response: 00:19:56.804 { 00:19:56.804 "code": -32602, 00:19:56.804 "message": "Invalid parameters" 00:19:56.804 } 00:19:56.804 03:07:51 -- target/tls.sh@36 -- # killprocess 2029273 00:19:56.804 03:07:51 -- common/autotest_common.sh@926 -- # '[' -z 2029273 ']' 00:19:56.804 03:07:51 -- common/autotest_common.sh@930 -- # kill -0 2029273 00:19:56.804 03:07:51 -- common/autotest_common.sh@931 -- # uname 00:19:56.804 03:07:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:56.804 03:07:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2029273 00:19:56.804 03:07:51 -- 
common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:56.804 03:07:51 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:56.804 03:07:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2029273' 00:19:56.804 killing process with pid 2029273 00:19:56.804 03:07:51 -- common/autotest_common.sh@945 -- # kill 2029273 00:19:56.804 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.804 00:19:56.804 Latency(us) 00:19:56.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.804 =================================================================================================================== 00:19:56.804 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:56.804 03:07:51 -- common/autotest_common.sh@950 -- # wait 2029273 00:19:56.804 03:07:51 -- target/tls.sh@37 -- # return 1 00:19:56.804 03:07:51 -- common/autotest_common.sh@643 -- # es=1 00:19:56.804 03:07:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:56.804 03:07:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:56.804 03:07:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:56.804 03:07:51 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:56.804 03:07:51 -- common/autotest_common.sh@640 -- # local es=0 00:19:56.804 03:07:51 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:56.804 03:07:51 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:19:56.804 03:07:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:56.804 03:07:51 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:19:56.804 03:07:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" 
in 00:19:56.804 03:07:51 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:56.804 03:07:51 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:56.804 03:07:51 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:56.804 03:07:51 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:56.804 03:07:51 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:19:56.804 03:07:51 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:56.804 03:07:51 -- target/tls.sh@28 -- # bdevperf_pid=2029524 00:19:56.804 03:07:51 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:56.804 03:07:51 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:56.804 03:07:51 -- target/tls.sh@31 -- # waitforlisten 2029524 /var/tmp/bdevperf.sock 00:19:56.804 03:07:51 -- common/autotest_common.sh@819 -- # '[' -z 2029524 ']' 00:19:56.804 03:07:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:56.804 03:07:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:56.804 03:07:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:56.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:56.804 03:07:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:56.804 03:07:51 -- common/autotest_common.sh@10 -- # set +x 00:19:56.804 [2024-07-14 03:07:51.684035] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
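The NOT wrapper driving these negative cases inverts the wrapped command's exit status, so the test case passes only when the mismatched-key bdev_nvme_attach_controller attempt fails. A reduced sketch (the real helper in autotest_common.sh also records the exit code in es and checks it against 128, omitted here):

```shell
# Succeed only when the wrapped command fails -- the negative-test
# pattern behind "NOT run_bdevperf ..." in the log.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded
    fi
    return 0        # command failed, as the test expects
}
```

This is why each failed attach in the log is followed by `return 1` from run_bdevperf and then `es=1` being treated as the passing outcome.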
00:19:56.804 [2024-07-14 03:07:51.684114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2029524 ] 00:19:56.804 EAL: No free 2048 kB hugepages reported on node 1 00:19:56.804 [2024-07-14 03:07:51.743256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.804 [2024-07-14 03:07:51.825822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.737 03:07:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:57.737 03:07:52 -- common/autotest_common.sh@852 -- # return 0 00:19:57.737 03:07:52 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:57.737 [2024-07-14 03:07:52.910946] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:57.737 [2024-07-14 03:07:52.920831] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:57.737 [2024-07-14 03:07:52.920885] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:57.737 [2024-07-14 03:07:52.920941] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:57.737 [2024-07-14 03:07:52.921280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf95130 (107): Transport endpoint is not connected 00:19:57.737 [2024-07-14 
03:07:52.922270] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf95130 (9): Bad file descriptor 00:19:57.737 [2024-07-14 03:07:52.923268] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:57.737 [2024-07-14 03:07:52.923288] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:57.737 [2024-07-14 03:07:52.923316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:57.737 request: 00:19:57.737 { 00:19:57.737 "name": "TLSTEST", 00:19:57.737 "trtype": "tcp", 00:19:57.737 "traddr": "10.0.0.2", 00:19:57.737 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:57.737 "adrfam": "ipv4", 00:19:57.737 "trsvcid": "4420", 00:19:57.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.737 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:19:57.737 "method": "bdev_nvme_attach_controller", 00:19:57.737 "req_id": 1 00:19:57.737 } 00:19:57.737 Got JSON-RPC error response 00:19:57.737 response: 00:19:57.737 { 00:19:57.737 "code": -32602, 00:19:57.737 "message": "Invalid parameters" 00:19:57.737 } 00:19:57.737 03:07:52 -- target/tls.sh@36 -- # killprocess 2029524 00:19:57.737 03:07:52 -- common/autotest_common.sh@926 -- # '[' -z 2029524 ']' 00:19:57.737 03:07:52 -- common/autotest_common.sh@930 -- # kill -0 2029524 00:19:57.737 03:07:52 -- common/autotest_common.sh@931 -- # uname 00:19:57.737 03:07:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:57.737 03:07:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2029524 00:19:57.737 03:07:52 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:57.737 03:07:52 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:57.737 03:07:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2029524' 00:19:57.737 killing process with pid 2029524 00:19:57.737 
03:07:52 -- common/autotest_common.sh@945 -- # kill 2029524 00:19:57.737 Received shutdown signal, test time was about 10.000000 seconds 00:19:57.737 00:19:57.737 Latency(us) 00:19:57.737 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.737 =================================================================================================================== 00:19:57.737 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:57.737 03:07:52 -- common/autotest_common.sh@950 -- # wait 2029524 00:19:57.996 03:07:53 -- target/tls.sh@37 -- # return 1 00:19:57.996 03:07:53 -- common/autotest_common.sh@643 -- # es=1 00:19:57.996 03:07:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:57.996 03:07:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:57.996 03:07:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:57.996 03:07:53 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:57.996 03:07:53 -- common/autotest_common.sh@640 -- # local es=0 00:19:57.996 03:07:53 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:57.996 03:07:53 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:19:57.996 03:07:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:57.996 03:07:53 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:19:57.996 03:07:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:57.996 03:07:53 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:57.996 03:07:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:57.996 03:07:53 -- 
target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:57.996 03:07:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:57.996 03:07:53 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:19:57.996 03:07:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:57.996 03:07:53 -- target/tls.sh@28 -- # bdevperf_pid=2029681 00:19:57.996 03:07:53 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:57.996 03:07:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:57.996 03:07:53 -- target/tls.sh@31 -- # waitforlisten 2029681 /var/tmp/bdevperf.sock 00:19:57.996 03:07:53 -- common/autotest_common.sh@819 -- # '[' -z 2029681 ']' 00:19:57.996 03:07:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:57.996 03:07:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:57.996 03:07:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:57.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:57.996 03:07:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:57.996 03:07:53 -- common/autotest_common.sh@10 -- # set +x 00:19:57.996 [2024-07-14 03:07:53.235010] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
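The "Could not find PSK for identity" errors above show the lookup key the target resolves PSKs by: the literal NVMe0R01 prefix followed by the host NQN and the subsystem NQN, space-separated. A sketch that rebuilds that string (illustrative helper name; the prefix is taken verbatim from the log and its field semantics are not expanded here):

```shell
# Rebuild the TLS PSK identity string as printed in the target's
# tcp_sock_get_key / posix_sock_psk_find_session_server_cb errors:
#   NVMe0R01 <host NQN> <subsystem NQN>
build_psk_identity() {
    printf 'NVMe0R01 %s %s' "$1" "$2"
}

build_psk_identity nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
```

A mismatch on either NQN (wrong host in the first negative case, wrong subsystem in the second) makes this lookup fail, which is what cascades into the "Transport endpoint is not connected" errors and the JSON-RPC Invalid parameters response.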
00:19:57.996 [2024-07-14 03:07:53.235089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2029681 ] 00:19:58.254 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.254 [2024-07-14 03:07:53.293167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.254 [2024-07-14 03:07:53.371540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:59.185 03:07:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:59.185 03:07:54 -- common/autotest_common.sh@852 -- # return 0 00:19:59.185 03:07:54 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:59.185 [2024-07-14 03:07:54.383834] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.185 [2024-07-14 03:07:54.394688] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:59.185 [2024-07-14 03:07:54.394716] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:59.185 [2024-07-14 03:07:54.394769] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:59.185 [2024-07-14 03:07:54.395017] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c130 (107): Transport endpoint is not connected 00:19:59.185 [2024-07-14 
03:07:54.396007] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c130 (9): Bad file descriptor 00:19:59.185 [2024-07-14 03:07:54.397006] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:59.186 [2024-07-14 03:07:54.397026] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:59.186 [2024-07-14 03:07:54.397040] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:19:59.186 request: 00:19:59.186 { 00:19:59.186 "name": "TLSTEST", 00:19:59.186 "trtype": "tcp", 00:19:59.186 "traddr": "10.0.0.2", 00:19:59.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:59.186 "adrfam": "ipv4", 00:19:59.186 "trsvcid": "4420", 00:19:59.186 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:59.186 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:19:59.186 "method": "bdev_nvme_attach_controller", 00:19:59.186 "req_id": 1 00:19:59.186 } 00:19:59.186 Got JSON-RPC error response 00:19:59.186 response: 00:19:59.186 { 00:19:59.186 "code": -32602, 00:19:59.186 "message": "Invalid parameters" 00:19:59.186 } 00:19:59.186 03:07:54 -- target/tls.sh@36 -- # killprocess 2029681 00:19:59.186 03:07:54 -- common/autotest_common.sh@926 -- # '[' -z 2029681 ']' 00:19:59.186 03:07:54 -- common/autotest_common.sh@930 -- # kill -0 2029681 00:19:59.186 03:07:54 -- common/autotest_common.sh@931 -- # uname 00:19:59.186 03:07:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:59.186 03:07:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2029681 00:19:59.186 03:07:54 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:59.186 03:07:54 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:59.444 03:07:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2029681' 00:19:59.444 killing process with pid 2029681 00:19:59.444 
03:07:54 -- common/autotest_common.sh@945 -- # kill 2029681 00:19:59.444 Received shutdown signal, test time was about 10.000000 seconds 00:19:59.444 00:19:59.444 Latency(us) 00:19:59.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:59.444 =================================================================================================================== 00:19:59.444 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:59.444 03:07:54 -- common/autotest_common.sh@950 -- # wait 2029681 00:19:59.444 03:07:54 -- target/tls.sh@37 -- # return 1 00:19:59.444 03:07:54 -- common/autotest_common.sh@643 -- # es=1 00:19:59.444 03:07:54 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:59.444 03:07:54 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:59.444 03:07:54 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:59.444 03:07:54 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:59.444 03:07:54 -- common/autotest_common.sh@640 -- # local es=0 00:19:59.444 03:07:54 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:59.444 03:07:54 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:19:59.444 03:07:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:59.444 03:07:54 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:19:59.444 03:07:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:59.444 03:07:54 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:59.444 03:07:54 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:59.444 03:07:54 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:59.444 03:07:54 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:59.444 03:07:54 -- target/tls.sh@23 -- # psk= 00:19:59.444 03:07:54 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:59.444 03:07:54 -- target/tls.sh@28 -- # bdevperf_pid=2029834 00:19:59.444 03:07:54 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:59.444 03:07:54 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:59.444 03:07:54 -- target/tls.sh@31 -- # waitforlisten 2029834 /var/tmp/bdevperf.sock 00:19:59.444 03:07:54 -- common/autotest_common.sh@819 -- # '[' -z 2029834 ']' 00:19:59.444 03:07:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:59.444 03:07:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:59.444 03:07:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:59.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:59.444 03:07:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:59.444 03:07:54 -- common/autotest_common.sh@10 -- # set +x 00:19:59.444 [2024-07-14 03:07:54.669464] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:19:59.444 [2024-07-14 03:07:54.669541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2029834 ] 00:19:59.703 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.703 [2024-07-14 03:07:54.732887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.703 [2024-07-14 03:07:54.819491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.637 03:07:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:00.637 03:07:55 -- common/autotest_common.sh@852 -- # return 0 00:20:00.637 03:07:55 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:00.896 [2024-07-14 03:07:55.899093] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:00.896 [2024-07-14 03:07:55.900974] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1694810 (9): Bad file descriptor 00:20:00.896 [2024-07-14 03:07:55.901969] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:00.896 [2024-07-14 03:07:55.901990] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:00.896 [2024-07-14 03:07:55.902004] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:00.896 request: 00:20:00.896 { 00:20:00.896 "name": "TLSTEST", 00:20:00.896 "trtype": "tcp", 00:20:00.896 "traddr": "10.0.0.2", 00:20:00.896 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.896 "adrfam": "ipv4", 00:20:00.896 "trsvcid": "4420", 00:20:00.896 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.896 "method": "bdev_nvme_attach_controller", 00:20:00.896 "req_id": 1 00:20:00.896 } 00:20:00.896 Got JSON-RPC error response 00:20:00.896 response: 00:20:00.896 { 00:20:00.896 "code": -32602, 00:20:00.896 "message": "Invalid parameters" 00:20:00.896 } 00:20:00.896 03:07:55 -- target/tls.sh@36 -- # killprocess 2029834 00:20:00.896 03:07:55 -- common/autotest_common.sh@926 -- # '[' -z 2029834 ']' 00:20:00.896 03:07:55 -- common/autotest_common.sh@930 -- # kill -0 2029834 00:20:00.896 03:07:55 -- common/autotest_common.sh@931 -- # uname 00:20:00.896 03:07:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:00.896 03:07:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2029834 00:20:00.896 03:07:55 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:00.896 03:07:55 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:00.896 03:07:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2029834' 00:20:00.896 killing process with pid 2029834 00:20:00.896 03:07:55 -- common/autotest_common.sh@945 -- # kill 2029834 00:20:00.896 Received shutdown signal, test time was about 10.000000 seconds 00:20:00.896 00:20:00.896 Latency(us) 00:20:00.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.896 =================================================================================================================== 00:20:00.896 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:00.896 03:07:55 -- common/autotest_common.sh@950 -- # wait 2029834 00:20:01.154 03:07:56 -- target/tls.sh@37 -- # return 1 00:20:01.154 03:07:56 -- 
common/autotest_common.sh@643 -- # es=1 00:20:01.154 03:07:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:01.154 03:07:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:01.154 03:07:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:01.154 03:07:56 -- target/tls.sh@167 -- # killprocess 2026053 00:20:01.154 03:07:56 -- common/autotest_common.sh@926 -- # '[' -z 2026053 ']' 00:20:01.154 03:07:56 -- common/autotest_common.sh@930 -- # kill -0 2026053 00:20:01.154 03:07:56 -- common/autotest_common.sh@931 -- # uname 00:20:01.154 03:07:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:01.154 03:07:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2026053 00:20:01.154 03:07:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:01.154 03:07:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:01.154 03:07:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2026053' 00:20:01.154 killing process with pid 2026053 00:20:01.154 03:07:56 -- common/autotest_common.sh@945 -- # kill 2026053 00:20:01.154 03:07:56 -- common/autotest_common.sh@950 -- # wait 2026053 00:20:01.412 03:07:56 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:20:01.412 03:07:56 -- target/tls.sh@49 -- # local key hash crc 00:20:01.412 03:07:56 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:01.412 03:07:56 -- target/tls.sh@51 -- # hash=02 00:20:01.412 03:07:56 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:20:01.412 03:07:56 -- target/tls.sh@52 -- # gzip -1 -c 00:20:01.412 03:07:56 -- target/tls.sh@52 -- # tail -c8 00:20:01.412 03:07:56 -- target/tls.sh@52 -- # head -c 4 00:20:01.412 03:07:56 -- target/tls.sh@52 -- # crc='�e�'\''' 00:20:01.412 03:07:56 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:01.412 03:07:56 -- target/tls.sh@54 -- # echo -n 
'00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:20:01.412 03:07:56 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:01.412 03:07:56 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:01.412 03:07:56 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:01.412 03:07:56 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:01.412 03:07:56 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:01.412 03:07:56 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:20:01.412 03:07:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:01.412 03:07:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:01.412 03:07:56 -- common/autotest_common.sh@10 -- # set +x 00:20:01.412 03:07:56 -- nvmf/common.sh@469 -- # nvmfpid=2030123 00:20:01.412 03:07:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:01.412 03:07:56 -- nvmf/common.sh@470 -- # waitforlisten 2030123 00:20:01.412 03:07:56 -- common/autotest_common.sh@819 -- # '[' -z 2030123 ']' 00:20:01.412 03:07:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.412 03:07:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:01.412 03:07:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:01.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:01.412 03:07:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:01.412 03:07:56 -- common/autotest_common.sh@10 -- # set +x 00:20:01.412 [2024-07-14 03:07:56.508496] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:01.412 [2024-07-14 03:07:56.508590] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.412 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.412 [2024-07-14 03:07:56.572719] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.412 [2024-07-14 03:07:56.658678] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:01.412 [2024-07-14 03:07:56.658843] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.412 [2024-07-14 03:07:56.658859] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.413 [2024-07-14 03:07:56.658906] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:01.413 [2024-07-14 03:07:56.658945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:02.348 03:07:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:02.348 03:07:57 -- common/autotest_common.sh@852 -- # return 0 00:20:02.348 03:07:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:02.348 03:07:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:02.348 03:07:57 -- common/autotest_common.sh@10 -- # set +x 00:20:02.348 03:07:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.348 03:07:57 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:02.348 03:07:57 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:02.348 03:07:57 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:02.606 [2024-07-14 03:07:57.695078] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.606 03:07:57 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:02.864 03:07:57 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:03.122 [2024-07-14 03:07:58.156366] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:03.122 [2024-07-14 03:07:58.156607] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.122 03:07:58 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:03.380 malloc0 00:20:03.380 03:07:58 -- target/tls.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:03.639 03:07:58 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:03.898 03:07:58 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:03.898 03:07:58 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:03.898 03:07:58 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:03.898 03:07:58 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:03.898 03:07:58 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:03.898 03:07:58 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:03.898 03:07:58 -- target/tls.sh@28 -- # bdevperf_pid=2030415 00:20:03.898 03:07:58 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:03.898 03:07:58 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:03.898 03:07:58 -- target/tls.sh@31 -- # waitforlisten 2030415 /var/tmp/bdevperf.sock 00:20:03.898 03:07:58 -- common/autotest_common.sh@819 -- # '[' -z 2030415 ']' 00:20:03.898 03:07:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.898 03:07:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:03.898 03:07:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:03.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:03.898 03:07:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:03.898 03:07:58 -- common/autotest_common.sh@10 -- # set +x 00:20:03.898 [2024-07-14 03:07:58.967716] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:03.898 [2024-07-14 03:07:58.967789] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2030415 ] 00:20:03.898 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.898 [2024-07-14 03:07:59.025876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.898 [2024-07-14 03:07:59.110469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.830 03:07:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:04.830 03:07:59 -- common/autotest_common.sh@852 -- # return 0 00:20:04.830 03:07:59 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:05.086 [2024-07-14 03:08:00.127437] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.086 TLSTESTn1 00:20:05.086 03:08:00 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:05.086 Running I/O for 10 seconds... 
00:20:17.278 00:20:17.278 Latency(us) 00:20:17.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.278 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:17.278 Verification LBA range: start 0x0 length 0x2000 00:20:17.278 TLSTESTn1 : 10.03 1989.29 7.77 0.00 0.00 64247.01 4781.70 64856.37 00:20:17.278 =================================================================================================================== 00:20:17.278 Total : 1989.29 7.77 0.00 0.00 64247.01 4781.70 64856.37 00:20:17.278 0 00:20:17.278 03:08:10 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:17.278 03:08:10 -- target/tls.sh@45 -- # killprocess 2030415 00:20:17.278 03:08:10 -- common/autotest_common.sh@926 -- # '[' -z 2030415 ']' 00:20:17.278 03:08:10 -- common/autotest_common.sh@930 -- # kill -0 2030415 00:20:17.278 03:08:10 -- common/autotest_common.sh@931 -- # uname 00:20:17.278 03:08:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:17.278 03:08:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2030415 00:20:17.278 03:08:10 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:17.278 03:08:10 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:17.278 03:08:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2030415' 00:20:17.278 killing process with pid 2030415 00:20:17.278 03:08:10 -- common/autotest_common.sh@945 -- # kill 2030415 00:20:17.278 Received shutdown signal, test time was about 10.000000 seconds 00:20:17.278 00:20:17.278 Latency(us) 00:20:17.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.278 =================================================================================================================== 00:20:17.278 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:17.278 03:08:10 -- common/autotest_common.sh@950 -- # wait 2030415 00:20:17.278 03:08:10 -- 
target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:17.278 03:08:10 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:17.278 03:08:10 -- common/autotest_common.sh@640 -- # local es=0 00:20:17.278 03:08:10 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:17.278 03:08:10 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:17.278 03:08:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:17.278 03:08:10 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:17.278 03:08:10 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:17.278 03:08:10 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:17.278 03:08:10 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:17.278 03:08:10 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:17.278 03:08:10 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:17.278 03:08:10 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:17.278 03:08:10 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:17.278 03:08:10 -- target/tls.sh@28 -- # bdevperf_pid=2031906 00:20:17.278 03:08:10 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:17.278 03:08:10 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:17.278 03:08:10 -- target/tls.sh@31 
-- # waitforlisten 2031906 /var/tmp/bdevperf.sock 00:20:17.278 03:08:10 -- common/autotest_common.sh@819 -- # '[' -z 2031906 ']' 00:20:17.278 03:08:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.278 03:08:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:17.278 03:08:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.278 03:08:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:17.278 03:08:10 -- common/autotest_common.sh@10 -- # set +x 00:20:17.278 [2024-07-14 03:08:10.670651] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:17.278 [2024-07-14 03:08:10.670729] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2031906 ] 00:20:17.278 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.278 [2024-07-14 03:08:10.728759] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.278 [2024-07-14 03:08:10.811722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:17.278 03:08:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:17.278 03:08:11 -- common/autotest_common.sh@852 -- # return 0 00:20:17.278 03:08:11 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:17.278 [2024-07-14 03:08:11.904704] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support 
is considered experimental 00:20:17.278 [2024-07-14 03:08:11.904772] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:17.278 request: 00:20:17.278 { 00:20:17.278 "name": "TLSTEST", 00:20:17.278 "trtype": "tcp", 00:20:17.278 "traddr": "10.0.0.2", 00:20:17.278 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:17.278 "adrfam": "ipv4", 00:20:17.278 "trsvcid": "4420", 00:20:17.278 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:17.278 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:17.278 "method": "bdev_nvme_attach_controller", 00:20:17.278 "req_id": 1 00:20:17.278 } 00:20:17.278 Got JSON-RPC error response 00:20:17.278 response: 00:20:17.278 { 00:20:17.278 "code": -22, 00:20:17.278 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:17.278 } 00:20:17.278 03:08:11 -- target/tls.sh@36 -- # killprocess 2031906 00:20:17.278 03:08:11 -- common/autotest_common.sh@926 -- # '[' -z 2031906 ']' 00:20:17.278 03:08:11 -- common/autotest_common.sh@930 -- # kill -0 2031906 00:20:17.278 03:08:11 -- common/autotest_common.sh@931 -- # uname 00:20:17.278 03:08:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:17.278 03:08:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2031906 00:20:17.278 03:08:11 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:17.278 03:08:11 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:17.278 03:08:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2031906' 00:20:17.278 killing process with pid 2031906 00:20:17.278 03:08:11 -- common/autotest_common.sh@945 -- # kill 2031906 00:20:17.278 Received shutdown signal, test time was about 10.000000 seconds 00:20:17.278 00:20:17.278 Latency(us) 00:20:17.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.278 
=================================================================================================================== 00:20:17.278 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:17.278 03:08:11 -- common/autotest_common.sh@950 -- # wait 2031906 00:20:17.278 03:08:12 -- target/tls.sh@37 -- # return 1 00:20:17.278 03:08:12 -- common/autotest_common.sh@643 -- # es=1 00:20:17.278 03:08:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:17.278 03:08:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:17.278 03:08:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:17.278 03:08:12 -- target/tls.sh@183 -- # killprocess 2030123 00:20:17.278 03:08:12 -- common/autotest_common.sh@926 -- # '[' -z 2030123 ']' 00:20:17.278 03:08:12 -- common/autotest_common.sh@930 -- # kill -0 2030123 00:20:17.278 03:08:12 -- common/autotest_common.sh@931 -- # uname 00:20:17.278 03:08:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:17.278 03:08:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2030123 00:20:17.278 03:08:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:17.278 03:08:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:17.278 03:08:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2030123' 00:20:17.278 killing process with pid 2030123 00:20:17.278 03:08:12 -- common/autotest_common.sh@945 -- # kill 2030123 00:20:17.278 03:08:12 -- common/autotest_common.sh@950 -- # wait 2030123 00:20:17.278 03:08:12 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:17.278 03:08:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:17.278 03:08:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:17.278 03:08:12 -- common/autotest_common.sh@10 -- # set +x 00:20:17.278 03:08:12 -- nvmf/common.sh@469 -- # nvmfpid=2032067 00:20:17.278 03:08:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:17.279 03:08:12 -- nvmf/common.sh@470 -- # waitforlisten 2032067 00:20:17.279 03:08:12 -- common/autotest_common.sh@819 -- # '[' -z 2032067 ']' 00:20:17.279 03:08:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.279 03:08:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:17.279 03:08:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.279 03:08:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:17.279 03:08:12 -- common/autotest_common.sh@10 -- # set +x 00:20:17.279 [2024-07-14 03:08:12.490538] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:17.279 [2024-07-14 03:08:12.490613] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.279 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.537 [2024-07-14 03:08:12.556085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.537 [2024-07-14 03:08:12.642137] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:17.537 [2024-07-14 03:08:12.642315] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.537 [2024-07-14 03:08:12.642333] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.537 [2024-07-14 03:08:12.642345] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:17.537 [2024-07-14 03:08:12.642377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.529 03:08:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:18.529 03:08:13 -- common/autotest_common.sh@852 -- # return 0 00:20:18.529 03:08:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:18.529 03:08:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:18.529 03:08:13 -- common/autotest_common.sh@10 -- # set +x 00:20:18.529 03:08:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:18.529 03:08:13 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:18.529 03:08:13 -- common/autotest_common.sh@640 -- # local es=0 00:20:18.529 03:08:13 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:18.529 03:08:13 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:20:18.529 03:08:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:18.529 03:08:13 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:20:18.529 03:08:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:18.529 03:08:13 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:18.529 03:08:13 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:18.529 03:08:13 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:18.529 [2024-07-14 03:08:13.729859] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:18.529 03:08:13 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:18.787 03:08:13 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:19.045 [2024-07-14 03:08:14.207164] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:19.045 [2024-07-14 03:08:14.207407] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.045 03:08:14 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:19.303 malloc0 00:20:19.303 03:08:14 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:19.561 03:08:14 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:19.819 [2024-07-14 03:08:14.900645] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:19.819 [2024-07-14 03:08:14.900688] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:19.819 [2024-07-14 03:08:14.900713] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:19.819 request: 00:20:19.819 { 00:20:19.819 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.819 "host": "nqn.2016-06.io.spdk:host1", 00:20:19.819 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:19.819 "method": "nvmf_subsystem_add_host", 00:20:19.819 "req_id": 1 00:20:19.819 } 00:20:19.819 Got JSON-RPC error response 00:20:19.819 response: 00:20:19.819 { 00:20:19.819 "code": -32603, 00:20:19.819 "message": "Internal error" 
00:20:19.819 } 00:20:19.819 03:08:14 -- common/autotest_common.sh@643 -- # es=1 00:20:19.819 03:08:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:19.819 03:08:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:19.819 03:08:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:19.819 03:08:14 -- target/tls.sh@189 -- # killprocess 2032067 00:20:19.819 03:08:14 -- common/autotest_common.sh@926 -- # '[' -z 2032067 ']' 00:20:19.819 03:08:14 -- common/autotest_common.sh@930 -- # kill -0 2032067 00:20:19.819 03:08:14 -- common/autotest_common.sh@931 -- # uname 00:20:19.819 03:08:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:19.819 03:08:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2032067 00:20:19.819 03:08:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:19.819 03:08:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:19.819 03:08:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2032067' 00:20:19.819 killing process with pid 2032067 00:20:19.819 03:08:14 -- common/autotest_common.sh@945 -- # kill 2032067 00:20:19.819 03:08:14 -- common/autotest_common.sh@950 -- # wait 2032067 00:20:20.077 03:08:15 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:20.077 03:08:15 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:20:20.077 03:08:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:20.077 03:08:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:20.077 03:08:15 -- common/autotest_common.sh@10 -- # set +x 00:20:20.077 03:08:15 -- nvmf/common.sh@469 -- # nvmfpid=2032500 00:20:20.077 03:08:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:20.077 03:08:15 -- nvmf/common.sh@470 -- # waitforlisten 2032500 00:20:20.077 03:08:15 -- 
common/autotest_common.sh@819 -- # '[' -z 2032500 ']' 00:20:20.077 03:08:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.077 03:08:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:20.077 03:08:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.077 03:08:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:20.077 03:08:15 -- common/autotest_common.sh@10 -- # set +x 00:20:20.077 [2024-07-14 03:08:15.243778] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:20.077 [2024-07-14 03:08:15.243890] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.077 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.077 [2024-07-14 03:08:15.310417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.336 [2024-07-14 03:08:15.401387] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:20.336 [2024-07-14 03:08:15.401558] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.336 [2024-07-14 03:08:15.401576] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.336 [2024-07-14 03:08:15.401588] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
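The `nvmf_subsystem_add_host` failure earlier in the log ("Incorrect permissions for PSK file" / "Could not retrieve PSK from file") is cured by the `chmod 0600` on key_long.txt before the target is restarted here: SPDK's TCP transport refuses a PSK file that is readable by group or other. A minimal reproduction of the permission requirement, using a throwaway file and placeholder key material rather than the real key_long.txt:

```shell
#!/usr/bin/env bash
# SPDK rejects PSK files readable by anyone but the owner; show the
# too-permissive mode and the fix applied by tls.sh (chmod 0600).
psk=$(mktemp)
echo "placeholder-psk-material" > "$psk"

chmod 0644 "$psk"          # group/other readable: SPDK would refuse this
stat -c '%a' "$psk"        # prints 644
chmod 0600 "$psk"          # owner read/write only
stat -c '%a' "$psk"        # prints 600
rm -f "$psk"
```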
00:20:20.336 [2024-07-14 03:08:15.401616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.270 03:08:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:21.270 03:08:16 -- common/autotest_common.sh@852 -- # return 0 00:20:21.270 03:08:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:21.270 03:08:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:21.270 03:08:16 -- common/autotest_common.sh@10 -- # set +x 00:20:21.270 03:08:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.270 03:08:16 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:21.270 03:08:16 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:21.270 03:08:16 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:21.270 [2024-07-14 03:08:16.447380] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.270 03:08:16 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:21.528 03:08:16 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:21.786 [2024-07-14 03:08:16.968770] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.786 [2024-07-14 03:08:16.969015] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.786 03:08:16 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:22.045 malloc0 00:20:22.045 03:08:17 -- target/tls.sh@65 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:22.303 03:08:17 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:22.560 03:08:17 -- target/tls.sh@197 -- # bdevperf_pid=2032802 00:20:22.560 03:08:17 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:22.560 03:08:17 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:22.560 03:08:17 -- target/tls.sh@200 -- # waitforlisten 2032802 /var/tmp/bdevperf.sock 00:20:22.560 03:08:17 -- common/autotest_common.sh@819 -- # '[' -z 2032802 ']' 00:20:22.561 03:08:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:22.561 03:08:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:22.561 03:08:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:22.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:22.561 03:08:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:22.561 03:08:17 -- common/autotest_common.sh@10 -- # set +x 00:20:22.561 [2024-07-14 03:08:17.723106] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
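`waitforlisten` (autotest_common.sh) retries up to `max_retries=100` until the freshly started bdevperf answers on its UNIX domain socket, /var/tmp/bdevperf.sock. A simplified stand-in for that loop: the real helper retries an RPC call, while this sketch only polls for the socket file to appear, and the python3 listener merely plays the role of the target process:

```shell
#!/usr/bin/env bash
# Poll for a UNIX domain socket to appear, in the spirit of waitforlisten.
sock=$(mktemp -u)
python3 -c "
import socket
s = socket.socket(socket.AF_UNIX)
s.bind('$sock')
s.listen(1)
" &
srv=$!

max_retries=100
for ((i = 0; i < max_retries; i++)); do
    [ -S "$sock" ] && break
    sleep 0.1
done
[ -S "$sock" ] && echo "listening on $sock"
wait "$srv" 2>/dev/null
rm -f "$sock"
```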
00:20:22.561 [2024-07-14 03:08:17.723190] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2032802 ] 00:20:22.561 EAL: No free 2048 kB hugepages reported on node 1 00:20:22.561 [2024-07-14 03:08:17.779521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:22.819 [2024-07-14 03:08:17.861262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:23.752 03:08:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:23.752 03:08:18 -- common/autotest_common.sh@852 -- # return 0 00:20:23.752 03:08:18 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:23.752 [2024-07-14 03:08:18.854802] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:23.752 TLSTESTn1 00:20:23.752 03:08:18 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:24.011 03:08:19 -- target/tls.sh@205 -- # tgtconf='{ 00:20:24.011 "subsystems": [ 00:20:24.011 { 00:20:24.011 "subsystem": "iobuf", 00:20:24.011 "config": [ 00:20:24.011 { 00:20:24.011 "method": "iobuf_set_options", 00:20:24.011 "params": { 00:20:24.011 "small_pool_count": 8192, 00:20:24.011 "large_pool_count": 1024, 00:20:24.011 "small_bufsize": 8192, 00:20:24.011 "large_bufsize": 135168 00:20:24.011 } 00:20:24.011 } 00:20:24.011 ] 00:20:24.011 }, 00:20:24.011 { 00:20:24.011 "subsystem": "sock", 00:20:24.011 "config": [ 00:20:24.011 { 00:20:24.011 "method": "sock_impl_set_options", 00:20:24.011 "params": { 00:20:24.011 "impl_name": "posix", 
00:20:24.011 "recv_buf_size": 2097152, 00:20:24.011 "send_buf_size": 2097152, 00:20:24.011 "enable_recv_pipe": true, 00:20:24.011 "enable_quickack": false, 00:20:24.011 "enable_placement_id": 0, 00:20:24.011 "enable_zerocopy_send_server": true, 00:20:24.011 "enable_zerocopy_send_client": false, 00:20:24.011 "zerocopy_threshold": 0, 00:20:24.011 "tls_version": 0, 00:20:24.011 "enable_ktls": false 00:20:24.011 } 00:20:24.011 }, 00:20:24.011 { 00:20:24.011 "method": "sock_impl_set_options", 00:20:24.011 "params": { 00:20:24.011 "impl_name": "ssl", 00:20:24.011 "recv_buf_size": 4096, 00:20:24.011 "send_buf_size": 4096, 00:20:24.011 "enable_recv_pipe": true, 00:20:24.011 "enable_quickack": false, 00:20:24.011 "enable_placement_id": 0, 00:20:24.011 "enable_zerocopy_send_server": true, 00:20:24.011 "enable_zerocopy_send_client": false, 00:20:24.011 "zerocopy_threshold": 0, 00:20:24.011 "tls_version": 0, 00:20:24.011 "enable_ktls": false 00:20:24.011 } 00:20:24.011 } 00:20:24.011 ] 00:20:24.011 }, 00:20:24.011 { 00:20:24.011 "subsystem": "vmd", 00:20:24.011 "config": [] 00:20:24.011 }, 00:20:24.011 { 00:20:24.011 "subsystem": "accel", 00:20:24.011 "config": [ 00:20:24.011 { 00:20:24.011 "method": "accel_set_options", 00:20:24.011 "params": { 00:20:24.011 "small_cache_size": 128, 00:20:24.011 "large_cache_size": 16, 00:20:24.011 "task_count": 2048, 00:20:24.011 "sequence_count": 2048, 00:20:24.011 "buf_count": 2048 00:20:24.011 } 00:20:24.011 } 00:20:24.011 ] 00:20:24.011 }, 00:20:24.011 { 00:20:24.011 "subsystem": "bdev", 00:20:24.011 "config": [ 00:20:24.011 { 00:20:24.011 "method": "bdev_set_options", 00:20:24.011 "params": { 00:20:24.011 "bdev_io_pool_size": 65535, 00:20:24.011 "bdev_io_cache_size": 256, 00:20:24.011 "bdev_auto_examine": true, 00:20:24.011 "iobuf_small_cache_size": 128, 00:20:24.011 "iobuf_large_cache_size": 16 00:20:24.011 } 00:20:24.011 }, 00:20:24.011 { 00:20:24.011 "method": "bdev_raid_set_options", 00:20:24.011 "params": { 00:20:24.011 
"process_window_size_kb": 1024 00:20:24.011 } 00:20:24.011 }, 00:20:24.011 { 00:20:24.011 "method": "bdev_iscsi_set_options", 00:20:24.011 "params": { 00:20:24.011 "timeout_sec": 30 00:20:24.011 } 00:20:24.011 }, 00:20:24.011 { 00:20:24.011 "method": "bdev_nvme_set_options", 00:20:24.011 "params": { 00:20:24.011 "action_on_timeout": "none", 00:20:24.011 "timeout_us": 0, 00:20:24.011 "timeout_admin_us": 0, 00:20:24.011 "keep_alive_timeout_ms": 10000, 00:20:24.011 "transport_retry_count": 4, 00:20:24.011 "arbitration_burst": 0, 00:20:24.011 "low_priority_weight": 0, 00:20:24.011 "medium_priority_weight": 0, 00:20:24.011 "high_priority_weight": 0, 00:20:24.011 "nvme_adminq_poll_period_us": 10000, 00:20:24.011 "nvme_ioq_poll_period_us": 0, 00:20:24.011 "io_queue_requests": 0, 00:20:24.011 "delay_cmd_submit": true, 00:20:24.011 "bdev_retry_count": 3, 00:20:24.011 "transport_ack_timeout": 0, 00:20:24.011 "ctrlr_loss_timeout_sec": 0, 00:20:24.011 "reconnect_delay_sec": 0, 00:20:24.011 "fast_io_fail_timeout_sec": 0, 00:20:24.011 "generate_uuids": false, 00:20:24.011 "transport_tos": 0, 00:20:24.011 "io_path_stat": false, 00:20:24.011 "allow_accel_sequence": false 00:20:24.011 } 00:20:24.011 }, 00:20:24.011 { 00:20:24.011 "method": "bdev_nvme_set_hotplug", 00:20:24.011 "params": { 00:20:24.011 "period_us": 100000, 00:20:24.011 "enable": false 00:20:24.011 } 00:20:24.011 }, 00:20:24.011 { 00:20:24.011 "method": "bdev_malloc_create", 00:20:24.011 "params": { 00:20:24.011 "name": "malloc0", 00:20:24.011 "num_blocks": 8192, 00:20:24.011 "block_size": 4096, 00:20:24.011 "physical_block_size": 4096, 00:20:24.011 "uuid": "f52c4e76-c598-4319-9f62-0be10467c5ec", 00:20:24.011 "optimal_io_boundary": 0 00:20:24.011 } 00:20:24.011 }, 00:20:24.011 { 00:20:24.011 "method": "bdev_wait_for_examine" 00:20:24.011 } 00:20:24.011 ] 00:20:24.011 }, 00:20:24.011 { 00:20:24.011 "subsystem": "nbd", 00:20:24.011 "config": [] 00:20:24.011 }, 00:20:24.011 { 00:20:24.011 "subsystem": "scheduler", 
00:20:24.011 "config": [ 00:20:24.011 { 00:20:24.011 "method": "framework_set_scheduler", 00:20:24.011 "params": { 00:20:24.011 "name": "static" 00:20:24.011 } 00:20:24.011 } 00:20:24.011 ] 00:20:24.011 }, 00:20:24.011 { 00:20:24.011 "subsystem": "nvmf", 00:20:24.011 "config": [ 00:20:24.011 { 00:20:24.011 "method": "nvmf_set_config", 00:20:24.011 "params": { 00:20:24.011 "discovery_filter": "match_any", 00:20:24.011 "admin_cmd_passthru": { 00:20:24.011 "identify_ctrlr": false 00:20:24.011 } 00:20:24.011 } 00:20:24.011 }, 00:20:24.011 { 00:20:24.011 "method": "nvmf_set_max_subsystems", 00:20:24.011 "params": { 00:20:24.011 "max_subsystems": 1024 00:20:24.011 } 00:20:24.011 }, 00:20:24.011 { 00:20:24.011 "method": "nvmf_set_crdt", 00:20:24.011 "params": { 00:20:24.011 "crdt1": 0, 00:20:24.011 "crdt2": 0, 00:20:24.011 "crdt3": 0 00:20:24.011 } 00:20:24.011 }, 00:20:24.011 { 00:20:24.011 "method": "nvmf_create_transport", 00:20:24.011 "params": { 00:20:24.011 "trtype": "TCP", 00:20:24.011 "max_queue_depth": 128, 00:20:24.011 "max_io_qpairs_per_ctrlr": 127, 00:20:24.011 "in_capsule_data_size": 4096, 00:20:24.012 "max_io_size": 131072, 00:20:24.012 "io_unit_size": 131072, 00:20:24.012 "max_aq_depth": 128, 00:20:24.012 "num_shared_buffers": 511, 00:20:24.012 "buf_cache_size": 4294967295, 00:20:24.012 "dif_insert_or_strip": false, 00:20:24.012 "zcopy": false, 00:20:24.012 "c2h_success": false, 00:20:24.012 "sock_priority": 0, 00:20:24.012 "abort_timeout_sec": 1 00:20:24.012 } 00:20:24.012 }, 00:20:24.012 { 00:20:24.012 "method": "nvmf_create_subsystem", 00:20:24.012 "params": { 00:20:24.012 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.012 "allow_any_host": false, 00:20:24.012 "serial_number": "SPDK00000000000001", 00:20:24.012 "model_number": "SPDK bdev Controller", 00:20:24.012 "max_namespaces": 10, 00:20:24.012 "min_cntlid": 1, 00:20:24.012 "max_cntlid": 65519, 00:20:24.012 "ana_reporting": false 00:20:24.012 } 00:20:24.012 }, 00:20:24.012 { 00:20:24.012 "method": 
"nvmf_subsystem_add_host", 00:20:24.012 "params": { 00:20:24.012 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.012 "host": "nqn.2016-06.io.spdk:host1", 00:20:24.012 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:24.012 } 00:20:24.012 }, 00:20:24.012 { 00:20:24.012 "method": "nvmf_subsystem_add_ns", 00:20:24.012 "params": { 00:20:24.012 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.012 "namespace": { 00:20:24.012 "nsid": 1, 00:20:24.012 "bdev_name": "malloc0", 00:20:24.012 "nguid": "F52C4E76C59843199F620BE10467C5EC", 00:20:24.012 "uuid": "f52c4e76-c598-4319-9f62-0be10467c5ec" 00:20:24.012 } 00:20:24.012 } 00:20:24.012 }, 00:20:24.012 { 00:20:24.012 "method": "nvmf_subsystem_add_listener", 00:20:24.012 "params": { 00:20:24.012 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.012 "listen_address": { 00:20:24.012 "trtype": "TCP", 00:20:24.012 "adrfam": "IPv4", 00:20:24.012 "traddr": "10.0.0.2", 00:20:24.012 "trsvcid": "4420" 00:20:24.012 }, 00:20:24.012 "secure_channel": true 00:20:24.012 } 00:20:24.012 } 00:20:24.012 ] 00:20:24.012 } 00:20:24.012 ] 00:20:24.012 }' 00:20:24.012 03:08:19 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:24.579 03:08:19 -- target/tls.sh@206 -- # bdevperfconf='{ 00:20:24.579 "subsystems": [ 00:20:24.579 { 00:20:24.579 "subsystem": "iobuf", 00:20:24.579 "config": [ 00:20:24.579 { 00:20:24.579 "method": "iobuf_set_options", 00:20:24.579 "params": { 00:20:24.579 "small_pool_count": 8192, 00:20:24.579 "large_pool_count": 1024, 00:20:24.579 "small_bufsize": 8192, 00:20:24.579 "large_bufsize": 135168 00:20:24.579 } 00:20:24.579 } 00:20:24.579 ] 00:20:24.579 }, 00:20:24.579 { 00:20:24.579 "subsystem": "sock", 00:20:24.579 "config": [ 00:20:24.579 { 00:20:24.579 "method": "sock_impl_set_options", 00:20:24.579 "params": { 00:20:24.579 "impl_name": "posix", 00:20:24.579 "recv_buf_size": 2097152, 00:20:24.579 
"send_buf_size": 2097152, 00:20:24.579 "enable_recv_pipe": true, 00:20:24.579 "enable_quickack": false, 00:20:24.579 "enable_placement_id": 0, 00:20:24.579 "enable_zerocopy_send_server": true, 00:20:24.579 "enable_zerocopy_send_client": false, 00:20:24.579 "zerocopy_threshold": 0, 00:20:24.579 "tls_version": 0, 00:20:24.579 "enable_ktls": false 00:20:24.579 } 00:20:24.579 }, 00:20:24.579 { 00:20:24.579 "method": "sock_impl_set_options", 00:20:24.579 "params": { 00:20:24.579 "impl_name": "ssl", 00:20:24.579 "recv_buf_size": 4096, 00:20:24.579 "send_buf_size": 4096, 00:20:24.579 "enable_recv_pipe": true, 00:20:24.579 "enable_quickack": false, 00:20:24.579 "enable_placement_id": 0, 00:20:24.579 "enable_zerocopy_send_server": true, 00:20:24.579 "enable_zerocopy_send_client": false, 00:20:24.579 "zerocopy_threshold": 0, 00:20:24.579 "tls_version": 0, 00:20:24.579 "enable_ktls": false 00:20:24.579 } 00:20:24.579 } 00:20:24.579 ] 00:20:24.579 }, 00:20:24.579 { 00:20:24.579 "subsystem": "vmd", 00:20:24.579 "config": [] 00:20:24.579 }, 00:20:24.579 { 00:20:24.579 "subsystem": "accel", 00:20:24.579 "config": [ 00:20:24.579 { 00:20:24.579 "method": "accel_set_options", 00:20:24.579 "params": { 00:20:24.579 "small_cache_size": 128, 00:20:24.579 "large_cache_size": 16, 00:20:24.579 "task_count": 2048, 00:20:24.579 "sequence_count": 2048, 00:20:24.579 "buf_count": 2048 00:20:24.579 } 00:20:24.579 } 00:20:24.579 ] 00:20:24.579 }, 00:20:24.579 { 00:20:24.579 "subsystem": "bdev", 00:20:24.579 "config": [ 00:20:24.579 { 00:20:24.579 "method": "bdev_set_options", 00:20:24.579 "params": { 00:20:24.579 "bdev_io_pool_size": 65535, 00:20:24.579 "bdev_io_cache_size": 256, 00:20:24.579 "bdev_auto_examine": true, 00:20:24.579 "iobuf_small_cache_size": 128, 00:20:24.579 "iobuf_large_cache_size": 16 00:20:24.579 } 00:20:24.579 }, 00:20:24.579 { 00:20:24.579 "method": "bdev_raid_set_options", 00:20:24.579 "params": { 00:20:24.579 "process_window_size_kb": 1024 00:20:24.579 } 00:20:24.579 }, 
00:20:24.579 { 00:20:24.579 "method": "bdev_iscsi_set_options", 00:20:24.579 "params": { 00:20:24.579 "timeout_sec": 30 00:20:24.579 } 00:20:24.579 }, 00:20:24.579 { 00:20:24.579 "method": "bdev_nvme_set_options", 00:20:24.579 "params": { 00:20:24.579 "action_on_timeout": "none", 00:20:24.579 "timeout_us": 0, 00:20:24.579 "timeout_admin_us": 0, 00:20:24.579 "keep_alive_timeout_ms": 10000, 00:20:24.579 "transport_retry_count": 4, 00:20:24.579 "arbitration_burst": 0, 00:20:24.579 "low_priority_weight": 0, 00:20:24.579 "medium_priority_weight": 0, 00:20:24.579 "high_priority_weight": 0, 00:20:24.579 "nvme_adminq_poll_period_us": 10000, 00:20:24.579 "nvme_ioq_poll_period_us": 0, 00:20:24.579 "io_queue_requests": 512, 00:20:24.579 "delay_cmd_submit": true, 00:20:24.579 "bdev_retry_count": 3, 00:20:24.579 "transport_ack_timeout": 0, 00:20:24.579 "ctrlr_loss_timeout_sec": 0, 00:20:24.579 "reconnect_delay_sec": 0, 00:20:24.579 "fast_io_fail_timeout_sec": 0, 00:20:24.579 "generate_uuids": false, 00:20:24.579 "transport_tos": 0, 00:20:24.579 "io_path_stat": false, 00:20:24.579 "allow_accel_sequence": false 00:20:24.579 } 00:20:24.579 }, 00:20:24.579 { 00:20:24.579 "method": "bdev_nvme_attach_controller", 00:20:24.579 "params": { 00:20:24.579 "name": "TLSTEST", 00:20:24.579 "trtype": "TCP", 00:20:24.579 "adrfam": "IPv4", 00:20:24.579 "traddr": "10.0.0.2", 00:20:24.579 "trsvcid": "4420", 00:20:24.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.579 "prchk_reftag": false, 00:20:24.579 "prchk_guard": false, 00:20:24.579 "ctrlr_loss_timeout_sec": 0, 00:20:24.579 "reconnect_delay_sec": 0, 00:20:24.579 "fast_io_fail_timeout_sec": 0, 00:20:24.579 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:24.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:24.579 "hdgst": false, 00:20:24.579 "ddgst": false 00:20:24.579 } 00:20:24.579 }, 00:20:24.579 { 00:20:24.579 "method": "bdev_nvme_set_hotplug", 00:20:24.579 "params": { 00:20:24.579 
"period_us": 100000, 00:20:24.579 "enable": false 00:20:24.579 } 00:20:24.579 }, 00:20:24.579 { 00:20:24.579 "method": "bdev_wait_for_examine" 00:20:24.579 } 00:20:24.579 ] 00:20:24.579 }, 00:20:24.579 { 00:20:24.579 "subsystem": "nbd", 00:20:24.579 "config": [] 00:20:24.579 } 00:20:24.579 ] 00:20:24.579 }' 00:20:24.579 03:08:19 -- target/tls.sh@208 -- # killprocess 2032802 00:20:24.579 03:08:19 -- common/autotest_common.sh@926 -- # '[' -z 2032802 ']' 00:20:24.579 03:08:19 -- common/autotest_common.sh@930 -- # kill -0 2032802 00:20:24.579 03:08:19 -- common/autotest_common.sh@931 -- # uname 00:20:24.579 03:08:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:24.579 03:08:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2032802 00:20:24.579 03:08:19 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:24.579 03:08:19 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:24.579 03:08:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2032802' 00:20:24.579 killing process with pid 2032802 00:20:24.579 03:08:19 -- common/autotest_common.sh@945 -- # kill 2032802 00:20:24.579 Received shutdown signal, test time was about 10.000000 seconds 00:20:24.579 00:20:24.579 Latency(us) 00:20:24.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.579 =================================================================================================================== 00:20:24.579 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:24.579 03:08:19 -- common/autotest_common.sh@950 -- # wait 2032802 00:20:24.579 03:08:19 -- target/tls.sh@209 -- # killprocess 2032500 00:20:24.580 03:08:19 -- common/autotest_common.sh@926 -- # '[' -z 2032500 ']' 00:20:24.580 03:08:19 -- common/autotest_common.sh@930 -- # kill -0 2032500 00:20:24.580 03:08:19 -- common/autotest_common.sh@931 -- # uname 00:20:24.580 03:08:19 -- common/autotest_common.sh@931 -- # '[' Linux = 
Linux ']' 00:20:24.580 03:08:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2032500 00:20:24.580 03:08:19 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:24.580 03:08:19 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:24.580 03:08:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2032500' 00:20:24.580 killing process with pid 2032500 00:20:24.580 03:08:19 -- common/autotest_common.sh@945 -- # kill 2032500 00:20:24.580 03:08:19 -- common/autotest_common.sh@950 -- # wait 2032500 00:20:24.838 03:08:20 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:24.838 03:08:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:24.838 03:08:20 -- target/tls.sh@212 -- # echo '{ 00:20:24.838 "subsystems": [ 00:20:24.838 { 00:20:24.838 "subsystem": "iobuf", 00:20:24.838 "config": [ 00:20:24.838 { 00:20:24.838 "method": "iobuf_set_options", 00:20:24.838 "params": { 00:20:24.838 "small_pool_count": 8192, 00:20:24.838 "large_pool_count": 1024, 00:20:24.838 "small_bufsize": 8192, 00:20:24.838 "large_bufsize": 135168 00:20:24.838 } 00:20:24.838 } 00:20:24.838 ] 00:20:24.838 }, 00:20:24.838 { 00:20:24.838 "subsystem": "sock", 00:20:24.838 "config": [ 00:20:24.838 { 00:20:24.838 "method": "sock_impl_set_options", 00:20:24.838 "params": { 00:20:24.838 "impl_name": "posix", 00:20:24.838 "recv_buf_size": 2097152, 00:20:24.838 "send_buf_size": 2097152, 00:20:24.838 "enable_recv_pipe": true, 00:20:24.838 "enable_quickack": false, 00:20:24.838 "enable_placement_id": 0, 00:20:24.838 "enable_zerocopy_send_server": true, 00:20:24.838 "enable_zerocopy_send_client": false, 00:20:24.838 "zerocopy_threshold": 0, 00:20:24.838 "tls_version": 0, 00:20:24.839 "enable_ktls": false 00:20:24.839 } 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "method": "sock_impl_set_options", 00:20:24.839 "params": { 00:20:24.839 "impl_name": "ssl", 00:20:24.839 "recv_buf_size": 4096, 00:20:24.839 "send_buf_size": 4096, 
00:20:24.839 "enable_recv_pipe": true, 00:20:24.839 "enable_quickack": false, 00:20:24.839 "enable_placement_id": 0, 00:20:24.839 "enable_zerocopy_send_server": true, 00:20:24.839 "enable_zerocopy_send_client": false, 00:20:24.839 "zerocopy_threshold": 0, 00:20:24.839 "tls_version": 0, 00:20:24.839 "enable_ktls": false 00:20:24.839 } 00:20:24.839 } 00:20:24.839 ] 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "subsystem": "vmd", 00:20:24.839 "config": [] 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "subsystem": "accel", 00:20:24.839 "config": [ 00:20:24.839 { 00:20:24.839 "method": "accel_set_options", 00:20:24.839 "params": { 00:20:24.839 "small_cache_size": 128, 00:20:24.839 "large_cache_size": 16, 00:20:24.839 "task_count": 2048, 00:20:24.839 "sequence_count": 2048, 00:20:24.839 "buf_count": 2048 00:20:24.839 } 00:20:24.839 } 00:20:24.839 ] 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "subsystem": "bdev", 00:20:24.839 "config": [ 00:20:24.839 { 00:20:24.839 "method": "bdev_set_options", 00:20:24.839 "params": { 00:20:24.839 "bdev_io_pool_size": 65535, 00:20:24.839 "bdev_io_cache_size": 256, 00:20:24.839 "bdev_auto_examine": true, 00:20:24.839 "iobuf_small_cache_size": 128, 00:20:24.839 "iobuf_large_cache_size": 16 00:20:24.839 } 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "method": "bdev_raid_set_options", 00:20:24.839 "params": { 00:20:24.839 "process_window_size_kb": 1024 00:20:24.839 } 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "method": "bdev_iscsi_set_options", 00:20:24.839 "params": { 00:20:24.839 "timeout_sec": 30 00:20:24.839 } 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "method": "bdev_nvme_set_options", 00:20:24.839 "params": { 00:20:24.839 "action_on_timeout": "none", 00:20:24.839 "timeout_us": 0, 00:20:24.839 "timeout_admin_us": 0, 00:20:24.839 "keep_alive_timeout_ms": 10000, 00:20:24.839 "transport_retry_count": 4, 00:20:24.839 "arbitration_burst": 0, 00:20:24.839 "low_priority_weight": 0, 00:20:24.839 "medium_priority_weight": 0, 00:20:24.839 
"high_priority_weight": 0, 00:20:24.839 "nvme_adminq_poll_period_us": 10000, 00:20:24.839 "nvme_ioq_poll_period_us": 0, 00:20:24.839 "io_queue_requests": 0, 00:20:24.839 "delay_cmd_submit": true, 00:20:24.839 "bdev_retry_count": 3, 00:20:24.839 "transport_ack_timeout": 0, 00:20:24.839 "ctrlr_loss_timeout_sec": 0, 00:20:24.839 "reconnect_delay_sec": 0, 00:20:24.839 "fast_io_fail_timeout_sec": 0, 00:20:24.839 "generate_uuids": false, 00:20:24.839 "transport_tos": 0, 00:20:24.839 "io_path_stat": false, 00:20:24.839 "allow_accel_sequence": false 00:20:24.839 } 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "method": "bdev_nvme_set_hotplug", 00:20:24.839 "params": { 00:20:24.839 "period_us": 100000, 00:20:24.839 "enable": false 00:20:24.839 } 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "method": "bdev_malloc_create", 00:20:24.839 "params": { 00:20:24.839 "name": "malloc0", 00:20:24.839 "num_blocks": 8192, 00:20:24.839 "block_size": 4096, 00:20:24.839 "physical_block_size": 4096, 00:20:24.839 "uuid": "f52c4e76-c598-4319-9f62-0be10467c5ec", 00:20:24.839 "optimal_io_boundary": 0 00:20:24.839 } 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "method": "bdev_wait_for_examine" 00:20:24.839 } 00:20:24.839 ] 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "subsystem": "nbd", 00:20:24.839 "config": [] 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "subsystem": "scheduler", 00:20:24.839 "config": [ 00:20:24.839 { 00:20:24.839 "method": "framework_set_scheduler", 00:20:24.839 "params": { 00:20:24.839 "name": "static" 00:20:24.839 } 00:20:24.839 } 00:20:24.839 ] 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "subsystem": "nvmf", 00:20:24.839 "config": [ 00:20:24.839 { 00:20:24.839 "method": "nvmf_set_config", 00:20:24.839 "params": { 00:20:24.839 "discovery_filter": "match_any", 00:20:24.839 "admin_cmd_passthru": { 00:20:24.839 "identify_ctrlr": false 00:20:24.839 } 00:20:24.839 } 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "method": "nvmf_set_max_subsystems", 00:20:24.839 "params": { 00:20:24.839 
"max_subsystems": 1024 00:20:24.839 } 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "method": "nvmf_set_crdt", 00:20:24.839 "params": { 00:20:24.839 "crdt1": 0, 00:20:24.839 "crdt2": 0, 00:20:24.839 "crdt3": 0 00:20:24.839 } 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "method": "nvmf_create_transport", 00:20:24.839 "params": { 00:20:24.839 "trtype": "TCP", 00:20:24.839 "max_queue_depth": 128, 00:20:24.839 "max_io_qpairs_per_ctrlr": 127, 00:20:24.839 "in_capsule_data_size": 4096, 00:20:24.839 "max_io_size": 131072, 00:20:24.839 "io_unit_size": 131072, 00:20:24.839 "max_aq_depth": 128, 00:20:24.839 "num_shared_buffers": 511, 00:20:24.839 "buf_cache_size": 4294967295, 00:20:24.839 "dif_insert_or_strip": false, 00:20:24.839 "zcopy": false, 00:20:24.839 "c2h_success": false, 00:20:24.839 "sock_priority": 0, 00:20:24.839 "abort_timeout_sec": 1 00:20:24.839 } 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "method": "nvmf_create_subsystem", 00:20:24.839 "params": { 00:20:24.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.839 "allow_any_host": false, 00:20:24.839 "serial_number": "SPDK00000000000001", 00:20:24.839 "model_number": "SPDK bdev Controller", 00:20:24.839 "max_namespaces": 10, 00:20:24.839 "min_cntlid": 1, 00:20:24.839 "max_cntlid": 65519, 00:20:24.839 "ana_reporting": false 00:20:24.839 } 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "method": "nvmf_subsystem_add_host", 00:20:24.839 "params": { 00:20:24.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.839 "host": "nqn.2016-06.io.spdk:host1", 00:20:24.839 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:24.839 } 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "method": "nvmf_subsystem_add_ns", 00:20:24.839 "params": { 00:20:24.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.839 "namespace": { 00:20:24.839 "nsid": 1, 00:20:24.839 "bdev_name": "malloc0", 00:20:24.839 "nguid": "F52C4E76C59843199F620BE10467C5EC", 00:20:24.839 "uuid": "f52c4e76-c598-4319-9f62-0be10467c5ec" 
00:20:24.839 } 00:20:24.839 } 00:20:24.839 }, 00:20:24.839 { 00:20:24.839 "method": "nvmf_subsystem_add_listener", 00:20:24.839 "params": { 00:20:24.839 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:24.839 "listen_address": { 00:20:24.839 "trtype": "TCP", 00:20:24.839 "adrfam": "IPv4", 00:20:24.839 "traddr": "10.0.0.2", 00:20:24.839 "trsvcid": "4420" 00:20:24.839 }, 00:20:24.839 "secure_channel": true 00:20:24.839 } 00:20:24.839 } 00:20:24.839 ] 00:20:24.839 } 00:20:24.839 ] 00:20:24.839 }' 00:20:24.839 03:08:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:24.839 03:08:20 -- common/autotest_common.sh@10 -- # set +x 00:20:24.839 03:08:20 -- nvmf/common.sh@469 -- # nvmfpid=2033092 00:20:24.839 03:08:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:24.839 03:08:20 -- nvmf/common.sh@470 -- # waitforlisten 2033092 00:20:24.839 03:08:20 -- common/autotest_common.sh@819 -- # '[' -z 2033092 ']' 00:20:24.839 03:08:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.839 03:08:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:24.839 03:08:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.839 03:08:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:24.839 03:08:20 -- common/autotest_common.sh@10 -- # set +x 00:20:25.098 [2024-07-14 03:08:20.100329] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:20:25.098 [2024-07-14 03:08:20.100399] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.098 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.098 [2024-07-14 03:08:20.166944] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.098 [2024-07-14 03:08:20.255254] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:25.098 [2024-07-14 03:08:20.255426] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.098 [2024-07-14 03:08:20.255445] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.098 [2024-07-14 03:08:20.255459] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:25.098 [2024-07-14 03:08:20.255492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:25.356 [2024-07-14 03:08:20.476009] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.356 [2024-07-14 03:08:20.508026] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:25.356 [2024-07-14 03:08:20.508253] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.923 03:08:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:25.923 03:08:21 -- common/autotest_common.sh@852 -- # return 0 00:20:25.923 03:08:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:25.923 03:08:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:25.923 03:08:21 -- common/autotest_common.sh@10 -- # set +x 00:20:25.923 03:08:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.923 03:08:21 -- target/tls.sh@216 -- # bdevperf_pid=2033247 
00:20:25.923 03:08:21 -- target/tls.sh@217 -- # waitforlisten 2033247 /var/tmp/bdevperf.sock 00:20:25.923 03:08:21 -- common/autotest_common.sh@819 -- # '[' -z 2033247 ']' 00:20:25.923 03:08:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.923 03:08:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:25.923 03:08:21 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:25.923 03:08:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.923 03:08:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:25.923 03:08:21 -- target/tls.sh@213 -- # echo '{ 00:20:25.923 "subsystems": [ 00:20:25.923 { 00:20:25.923 "subsystem": "iobuf", 00:20:25.923 "config": [ 00:20:25.923 { 00:20:25.923 "method": "iobuf_set_options", 00:20:25.923 "params": { 00:20:25.923 "small_pool_count": 8192, 00:20:25.923 "large_pool_count": 1024, 00:20:25.923 "small_bufsize": 8192, 00:20:25.923 "large_bufsize": 135168 00:20:25.923 } 00:20:25.923 } 00:20:25.923 ] 00:20:25.923 }, 00:20:25.923 { 00:20:25.923 "subsystem": "sock", 00:20:25.923 "config": [ 00:20:25.923 { 00:20:25.923 "method": "sock_impl_set_options", 00:20:25.923 "params": { 00:20:25.923 "impl_name": "posix", 00:20:25.923 "recv_buf_size": 2097152, 00:20:25.923 "send_buf_size": 2097152, 00:20:25.923 "enable_recv_pipe": true, 00:20:25.923 "enable_quickack": false, 00:20:25.923 "enable_placement_id": 0, 00:20:25.923 "enable_zerocopy_send_server": true, 00:20:25.923 "enable_zerocopy_send_client": false, 00:20:25.923 "zerocopy_threshold": 0, 00:20:25.923 "tls_version": 0, 00:20:25.923 "enable_ktls": false 00:20:25.923 } 00:20:25.923 }, 00:20:25.923 { 
00:20:25.923 "method": "sock_impl_set_options", 00:20:25.923 "params": { 00:20:25.923 "impl_name": "ssl", 00:20:25.923 "recv_buf_size": 4096, 00:20:25.923 "send_buf_size": 4096, 00:20:25.923 "enable_recv_pipe": true, 00:20:25.923 "enable_quickack": false, 00:20:25.923 "enable_placement_id": 0, 00:20:25.923 "enable_zerocopy_send_server": true, 00:20:25.923 "enable_zerocopy_send_client": false, 00:20:25.923 "zerocopy_threshold": 0, 00:20:25.923 "tls_version": 0, 00:20:25.923 "enable_ktls": false 00:20:25.923 } 00:20:25.923 } 00:20:25.923 ] 00:20:25.923 }, 00:20:25.923 { 00:20:25.923 "subsystem": "vmd", 00:20:25.923 "config": [] 00:20:25.923 }, 00:20:25.923 { 00:20:25.923 "subsystem": "accel", 00:20:25.923 "config": [ 00:20:25.923 { 00:20:25.923 "method": "accel_set_options", 00:20:25.923 "params": { 00:20:25.923 "small_cache_size": 128, 00:20:25.923 "large_cache_size": 16, 00:20:25.923 "task_count": 2048, 00:20:25.923 "sequence_count": 2048, 00:20:25.923 "buf_count": 2048 00:20:25.923 } 00:20:25.923 } 00:20:25.923 ] 00:20:25.923 }, 00:20:25.923 { 00:20:25.923 "subsystem": "bdev", 00:20:25.923 "config": [ 00:20:25.923 { 00:20:25.923 "method": "bdev_set_options", 00:20:25.923 "params": { 00:20:25.923 "bdev_io_pool_size": 65535, 00:20:25.923 "bdev_io_cache_size": 256, 00:20:25.923 "bdev_auto_examine": true, 00:20:25.923 "iobuf_small_cache_size": 128, 00:20:25.923 "iobuf_large_cache_size": 16 00:20:25.923 } 00:20:25.923 }, 00:20:25.923 { 00:20:25.923 "method": "bdev_raid_set_options", 00:20:25.923 "params": { 00:20:25.923 "process_window_size_kb": 1024 00:20:25.923 } 00:20:25.923 }, 00:20:25.923 { 00:20:25.923 "method": "bdev_iscsi_set_options", 00:20:25.923 "params": { 00:20:25.923 "timeout_sec": 30 00:20:25.923 } 00:20:25.923 }, 00:20:25.923 { 00:20:25.923 "method": "bdev_nvme_set_options", 00:20:25.923 "params": { 00:20:25.923 "action_on_timeout": "none", 00:20:25.923 "timeout_us": 0, 00:20:25.923 "timeout_admin_us": 0, 00:20:25.923 "keep_alive_timeout_ms": 10000, 
00:20:25.923 "transport_retry_count": 4, 00:20:25.923 "arbitration_burst": 0, 00:20:25.923 "low_priority_weight": 0, 00:20:25.923 "medium_priority_weight": 0, 00:20:25.923 "high_priority_weight": 0, 00:20:25.923 "nvme_adminq_poll_period_us": 10000, 00:20:25.923 "nvme_ioq_poll_period_us": 0, 00:20:25.923 "io_queue_requests": 512, 00:20:25.923 "delay_cmd_submit": true, 00:20:25.923 "bdev_retry_count": 3, 00:20:25.923 "transport_ack_timeout": 0, 00:20:25.923 "ctrlr_loss_timeout_sec": 0, 00:20:25.923 "reconnect_delay_sec": 0, 00:20:25.923 "fast_io_fail_timeout_sec": 0, 00:20:25.923 "generate_uuids": false, 00:20:25.923 "transport_tos": 0, 00:20:25.923 "io_path_stat": false, 00:20:25.923 "allow_accel_sequence": false 00:20:25.923 } 00:20:25.923 }, 00:20:25.923 { 00:20:25.923 "method": "bdev_nvme_attach_controller", 00:20:25.923 "params": { 00:20:25.923 "name": "TLSTEST", 00:20:25.923 "trtype": "TCP", 00:20:25.923 "adrfam": "IPv4", 00:20:25.923 "traddr": "10.0.0.2", 00:20:25.923 "trsvcid": "4420", 00:20:25.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:25.923 "prchk_reftag": false, 00:20:25.923 "prchk_guard": false, 00:20:25.923 "ctrlr_loss_timeout_sec": 0, 00:20:25.923 "reconnect_delay_sec": 0, 00:20:25.923 "fast_io_fail_timeout_sec": 0, 00:20:25.923 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:25.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:25.923 "hdgst": false, 00:20:25.923 "ddgst": false 00:20:25.923 } 00:20:25.923 }, 00:20:25.923 { 00:20:25.923 "method": "bdev_nvme_set_hotplug", 00:20:25.923 "params": { 00:20:25.923 "period_us": 100000, 00:20:25.923 "enable": false 00:20:25.923 } 00:20:25.923 }, 00:20:25.923 { 00:20:25.923 "method": "bdev_wait_for_examine" 00:20:25.923 } 00:20:25.923 ] 00:20:25.923 }, 00:20:25.923 { 00:20:25.923 "subsystem": "nbd", 00:20:25.923 "config": [] 00:20:25.923 } 00:20:25.923 ] 00:20:25.923 }' 00:20:25.923 03:08:21 -- common/autotest_common.sh@10 -- # set +x 00:20:25.923 
[2024-07-14 03:08:21.096399] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:25.923 [2024-07-14 03:08:21.096480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2033247 ] 00:20:25.923 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.923 [2024-07-14 03:08:21.153572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.182 [2024-07-14 03:08:21.235625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.182 [2024-07-14 03:08:21.393565] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.114 03:08:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:27.114 03:08:22 -- common/autotest_common.sh@852 -- # return 0 00:20:27.114 03:08:22 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:27.114 Running I/O for 10 seconds... 
00:20:37.083 00:20:37.083 Latency(us) 00:20:37.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.083 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:37.083 Verification LBA range: start 0x0 length 0x2000 00:20:37.083 TLSTESTn1 : 10.03 1992.97 7.79 0.00 0.00 64128.95 11553.75 67574.90 00:20:37.083 =================================================================================================================== 00:20:37.083 Total : 1992.97 7.79 0.00 0.00 64128.95 11553.75 67574.90 00:20:37.083 0 00:20:37.083 03:08:32 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:37.083 03:08:32 -- target/tls.sh@223 -- # killprocess 2033247 00:20:37.083 03:08:32 -- common/autotest_common.sh@926 -- # '[' -z 2033247 ']' 00:20:37.083 03:08:32 -- common/autotest_common.sh@930 -- # kill -0 2033247 00:20:37.083 03:08:32 -- common/autotest_common.sh@931 -- # uname 00:20:37.083 03:08:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:37.083 03:08:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2033247 00:20:37.083 03:08:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:37.083 03:08:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:37.083 03:08:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2033247' 00:20:37.083 killing process with pid 2033247 00:20:37.083 03:08:32 -- common/autotest_common.sh@945 -- # kill 2033247 00:20:37.083 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.083 00:20:37.083 Latency(us) 00:20:37.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.083 =================================================================================================================== 00:20:37.083 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.083 03:08:32 -- common/autotest_common.sh@950 -- # wait 2033247 00:20:37.340 03:08:32 -- 
target/tls.sh@224 -- # killprocess 2033092 00:20:37.340 03:08:32 -- common/autotest_common.sh@926 -- # '[' -z 2033092 ']' 00:20:37.340 03:08:32 -- common/autotest_common.sh@930 -- # kill -0 2033092 00:20:37.340 03:08:32 -- common/autotest_common.sh@931 -- # uname 00:20:37.340 03:08:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:37.340 03:08:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2033092 00:20:37.340 03:08:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:37.340 03:08:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:37.340 03:08:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2033092' 00:20:37.340 killing process with pid 2033092 00:20:37.340 03:08:32 -- common/autotest_common.sh@945 -- # kill 2033092 00:20:37.340 03:08:32 -- common/autotest_common.sh@950 -- # wait 2033092 00:20:37.598 03:08:32 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:20:37.598 03:08:32 -- target/tls.sh@227 -- # cleanup 00:20:37.598 03:08:32 -- target/tls.sh@15 -- # process_shm --id 0 00:20:37.598 03:08:32 -- common/autotest_common.sh@796 -- # type=--id 00:20:37.598 03:08:32 -- common/autotest_common.sh@797 -- # id=0 00:20:37.598 03:08:32 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:20:37.598 03:08:32 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:37.598 03:08:32 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:20:37.598 03:08:32 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:20:37.598 03:08:32 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:20:37.598 03:08:32 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:37.598 nvmf_trace.0 00:20:37.598 03:08:32 -- common/autotest_common.sh@811 -- # return 0 00:20:37.598 03:08:32 -- target/tls.sh@16 -- # killprocess 2033247 
00:20:37.598 03:08:32 -- common/autotest_common.sh@926 -- # '[' -z 2033247 ']' 00:20:37.598 03:08:32 -- common/autotest_common.sh@930 -- # kill -0 2033247 00:20:37.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2033247) - No such process 00:20:37.598 03:08:32 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2033247 is not found' 00:20:37.598 Process with pid 2033247 is not found 00:20:37.598 03:08:32 -- target/tls.sh@17 -- # nvmftestfini 00:20:37.598 03:08:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:37.598 03:08:32 -- nvmf/common.sh@116 -- # sync 00:20:37.598 03:08:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:37.598 03:08:32 -- nvmf/common.sh@119 -- # set +e 00:20:37.598 03:08:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:37.598 03:08:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:37.598 rmmod nvme_tcp 00:20:37.598 rmmod nvme_fabrics 00:20:37.598 rmmod nvme_keyring 00:20:37.598 03:08:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:37.598 03:08:32 -- nvmf/common.sh@123 -- # set -e 00:20:37.598 03:08:32 -- nvmf/common.sh@124 -- # return 0 00:20:37.598 03:08:32 -- nvmf/common.sh@477 -- # '[' -n 2033092 ']' 00:20:37.598 03:08:32 -- nvmf/common.sh@478 -- # killprocess 2033092 00:20:37.598 03:08:32 -- common/autotest_common.sh@926 -- # '[' -z 2033092 ']' 00:20:37.598 03:08:32 -- common/autotest_common.sh@930 -- # kill -0 2033092 00:20:37.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2033092) - No such process 00:20:37.598 03:08:32 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2033092 is not found' 00:20:37.598 Process with pid 2033092 is not found 00:20:37.598 03:08:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:37.598 03:08:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:37.598 03:08:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:37.598 03:08:32 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:37.598 03:08:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:37.598 03:08:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:37.598 03:08:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:37.598 03:08:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.192 03:08:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:40.192 03:08:34 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:40.192 00:20:40.192 real 1m13.813s 00:20:40.192 user 1m56.350s 00:20:40.192 sys 0m26.150s 00:20:40.192 03:08:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:40.192 03:08:34 -- common/autotest_common.sh@10 -- # set +x 00:20:40.192 ************************************ 00:20:40.192 END TEST nvmf_tls 00:20:40.192 ************************************ 00:20:40.192 03:08:34 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:40.192 03:08:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:40.192 03:08:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:40.192 03:08:34 -- common/autotest_common.sh@10 -- # set +x 00:20:40.192 ************************************ 00:20:40.192 START TEST nvmf_fips 00:20:40.192 ************************************ 00:20:40.192 03:08:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:40.192 * Looking for test storage... 
00:20:40.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:40.192 03:08:34 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:40.192 03:08:34 -- nvmf/common.sh@7 -- # uname -s 00:20:40.192 03:08:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.192 03:08:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.192 03:08:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.192 03:08:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.192 03:08:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.192 03:08:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.192 03:08:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.192 03:08:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.192 03:08:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.192 03:08:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.192 03:08:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.192 03:08:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.192 03:08:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.192 03:08:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.192 03:08:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:40.192 03:08:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:40.192 03:08:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.192 03:08:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.192 03:08:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.192 03:08:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.192 03:08:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.192 03:08:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.192 03:08:34 -- paths/export.sh@5 -- # export PATH 00:20:40.192 03:08:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.192 03:08:34 -- nvmf/common.sh@46 -- # : 0 00:20:40.192 03:08:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:40.192 03:08:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:40.192 03:08:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:40.192 03:08:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.192 03:08:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.192 03:08:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:40.192 03:08:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:40.192 03:08:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:40.192 03:08:34 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:40.192 03:08:34 -- fips/fips.sh@89 -- # check_openssl_version 00:20:40.192 03:08:34 -- fips/fips.sh@83 -- # local target=3.0.0 00:20:40.192 03:08:34 -- fips/fips.sh@85 -- # openssl version 00:20:40.192 03:08:34 -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:40.192 03:08:34 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:40.192 03:08:34 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:40.192 03:08:34 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:40.192 03:08:34 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:40.192 03:08:34 -- scripts/common.sh@335 -- # IFS=.-: 00:20:40.192 03:08:34 -- scripts/common.sh@335 -- # read -ra ver1 00:20:40.192 03:08:34 -- scripts/common.sh@336 -- # IFS=.-: 
00:20:40.192 03:08:34 -- scripts/common.sh@336 -- # read -ra ver2 00:20:40.192 03:08:34 -- scripts/common.sh@337 -- # local 'op=>=' 00:20:40.192 03:08:34 -- scripts/common.sh@339 -- # ver1_l=3 00:20:40.192 03:08:34 -- scripts/common.sh@340 -- # ver2_l=3 00:20:40.192 03:08:34 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:40.192 03:08:34 -- scripts/common.sh@343 -- # case "$op" in 00:20:40.192 03:08:34 -- scripts/common.sh@347 -- # : 1 00:20:40.192 03:08:34 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:40.192 03:08:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:40.192 03:08:34 -- scripts/common.sh@364 -- # decimal 3 00:20:40.192 03:08:34 -- scripts/common.sh@352 -- # local d=3 00:20:40.192 03:08:34 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:40.192 03:08:34 -- scripts/common.sh@354 -- # echo 3 00:20:40.192 03:08:34 -- scripts/common.sh@364 -- # ver1[v]=3 00:20:40.192 03:08:34 -- scripts/common.sh@365 -- # decimal 3 00:20:40.192 03:08:34 -- scripts/common.sh@352 -- # local d=3 00:20:40.192 03:08:34 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:40.192 03:08:34 -- scripts/common.sh@354 -- # echo 3 00:20:40.192 03:08:34 -- scripts/common.sh@365 -- # ver2[v]=3 00:20:40.192 03:08:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:40.192 03:08:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:40.192 03:08:34 -- scripts/common.sh@363 -- # (( v++ )) 00:20:40.192 03:08:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:40.192 03:08:34 -- scripts/common.sh@364 -- # decimal 0 00:20:40.192 03:08:34 -- scripts/common.sh@352 -- # local d=0 00:20:40.192 03:08:34 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:40.192 03:08:34 -- scripts/common.sh@354 -- # echo 0 00:20:40.192 03:08:34 -- scripts/common.sh@364 -- # ver1[v]=0 00:20:40.192 03:08:34 -- scripts/common.sh@365 -- # decimal 0 00:20:40.192 03:08:34 -- scripts/common.sh@352 -- # local d=0 00:20:40.192 03:08:34 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:40.192 03:08:34 -- scripts/common.sh@354 -- # echo 0 00:20:40.192 03:08:34 -- scripts/common.sh@365 -- # ver2[v]=0 00:20:40.192 03:08:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:40.192 03:08:34 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:40.192 03:08:34 -- scripts/common.sh@363 -- # (( v++ )) 00:20:40.192 03:08:34 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:40.192 03:08:34 -- scripts/common.sh@364 -- # decimal 9 00:20:40.192 03:08:34 -- scripts/common.sh@352 -- # local d=9 00:20:40.192 03:08:34 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:40.192 03:08:34 -- scripts/common.sh@354 -- # echo 9 00:20:40.192 03:08:34 -- scripts/common.sh@364 -- # ver1[v]=9 00:20:40.192 03:08:34 -- scripts/common.sh@365 -- # decimal 0 00:20:40.192 03:08:34 -- scripts/common.sh@352 -- # local d=0 00:20:40.192 03:08:34 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:40.192 03:08:34 -- scripts/common.sh@354 -- # echo 0 00:20:40.192 03:08:34 -- scripts/common.sh@365 -- # ver2[v]=0 00:20:40.192 03:08:34 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:40.192 03:08:34 -- scripts/common.sh@366 -- # return 0 00:20:40.192 03:08:34 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:40.192 03:08:34 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:40.192 03:08:34 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:40.193 03:08:34 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:40.193 03:08:34 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:40.193 03:08:34 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:40.193 03:08:34 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:40.193 03:08:34 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:20:40.193 03:08:34 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:20:40.193 03:08:34 -- fips/fips.sh@114 -- # build_openssl_config 00:20:40.193 03:08:34 -- fips/fips.sh@37 -- # cat 00:20:40.193 03:08:34 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:20:40.193 03:08:34 -- fips/fips.sh@58 -- # cat - 00:20:40.193 03:08:34 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:40.193 03:08:34 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:40.193 03:08:34 -- fips/fips.sh@117 -- # mapfile -t providers 00:20:40.193 03:08:34 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:20:40.193 03:08:34 -- fips/fips.sh@117 -- # openssl list -providers 00:20:40.193 03:08:34 -- fips/fips.sh@117 -- # grep name 00:20:40.193 03:08:35 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:40.193 03:08:35 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:40.193 03:08:35 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:40.193 03:08:35 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:40.193 03:08:35 -- fips/fips.sh@128 -- # : 00:20:40.193 03:08:35 -- common/autotest_common.sh@640 -- # local es=0 00:20:40.193 03:08:35 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:40.193 03:08:35 -- common/autotest_common.sh@628 -- # local arg=openssl 00:20:40.193 03:08:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:40.193 03:08:35 -- common/autotest_common.sh@632 -- # type -t openssl 00:20:40.193 03:08:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:40.193 03:08:35 -- common/autotest_common.sh@634 -- # type -P openssl 00:20:40.193 03:08:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:40.193 03:08:35 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:20:40.193 03:08:35 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:20:40.193 03:08:35 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:20:40.193 Error setting digest 00:20:40.193 0032A55E867F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, 
Algorithm (MD5 : 97), Properties () 00:20:40.193 0032A55E867F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:40.193 03:08:35 -- common/autotest_common.sh@643 -- # es=1 00:20:40.193 03:08:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:40.193 03:08:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:40.193 03:08:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:40.193 03:08:35 -- fips/fips.sh@131 -- # nvmftestinit 00:20:40.193 03:08:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:40.193 03:08:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.193 03:08:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:40.193 03:08:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:40.193 03:08:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:40.193 03:08:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.193 03:08:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.193 03:08:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.193 03:08:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:40.193 03:08:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:40.193 03:08:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:40.193 03:08:35 -- common/autotest_common.sh@10 -- # set +x 00:20:42.094 03:08:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:42.094 03:08:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:42.094 03:08:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:42.094 03:08:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:42.094 03:08:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:42.094 03:08:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:42.094 03:08:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:42.094 03:08:37 -- nvmf/common.sh@294 -- # net_devs=() 00:20:42.094 03:08:37 -- nvmf/common.sh@294 -- 
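[Editor's note] The provider validation traced above (fips.sh lines ~117-121) collects `openssl list -providers` output with `mapfile` and requires exactly one base and one FIPS provider. The sketch below restates that check as a standalone function; the sample listing is illustrative, mirroring the provider names seen in this run, and `check_fips_providers` is a hypothetical name, not part of the harness.

```shell
#!/usr/bin/env bash
# Sketch of the FIPS provider check performed by fips.sh: given the text of
# `openssl list -providers`, confirm that exactly two providers are loaded
# and that they are the base and FIPS providers, in that order.
check_fips_providers() {
    local listing=$1
    local -a providers
    # Keep only the "name:" lines, one array element per provider.
    mapfile -t providers < <(grep name <<<"$listing")
    (( ${#providers[@]} == 2 )) || return 1      # fips.sh@121: (( 2 != 2 ))
    [[ ${providers[0]} == *base* ]] || return 1  # first provider must be base
    [[ ${providers[1]} == *fips* ]] || return 1  # second must be FIPS
}

# Sample listing shaped like the one this run produced (RHEL 9 OpenSSL).
sample='name: openssl base provider
name: red hat enterprise linux 9 - openssl fips provider'

if check_fips_providers "$sample"; then
    echo "providers ok"
fi
```

A listing with a missing or reordered provider makes the function return nonzero, which is what causes the test to bail out before attempting any TLS traffic.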
# local -ga net_devs 00:20:42.094 03:08:37 -- nvmf/common.sh@295 -- # e810=() 00:20:42.094 03:08:37 -- nvmf/common.sh@295 -- # local -ga e810 00:20:42.094 03:08:37 -- nvmf/common.sh@296 -- # x722=() 00:20:42.094 03:08:37 -- nvmf/common.sh@296 -- # local -ga x722 00:20:42.094 03:08:37 -- nvmf/common.sh@297 -- # mlx=() 00:20:42.094 03:08:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:42.094 03:08:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.094 03:08:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.094 03:08:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.094 03:08:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.094 03:08:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.094 03:08:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.094 03:08:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.094 03:08:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.094 03:08:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.094 03:08:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.094 03:08:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.094 03:08:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:42.094 03:08:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:42.094 03:08:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:42.094 03:08:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:42.094 03:08:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:42.094 03:08:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:42.094 03:08:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:42.094 03:08:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:42.094 Found 
0000:0a:00.0 (0x8086 - 0x159b) 00:20:42.094 03:08:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:42.094 03:08:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:42.095 03:08:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.095 03:08:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.095 03:08:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:42.095 03:08:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:42.095 03:08:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:42.095 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:42.095 03:08:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:42.095 03:08:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:42.095 03:08:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.095 03:08:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.095 03:08:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:42.095 03:08:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:42.095 03:08:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:42.095 03:08:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:42.095 03:08:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:42.095 03:08:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.095 03:08:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:42.095 03:08:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.095 03:08:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:42.095 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:42.095 03:08:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.095 03:08:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:42.095 03:08:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.095 03:08:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:42.095 
03:08:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.095 03:08:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:42.095 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:42.095 03:08:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.095 03:08:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:42.095 03:08:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:42.095 03:08:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:42.095 03:08:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:42.095 03:08:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:42.095 03:08:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.095 03:08:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.095 03:08:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.095 03:08:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:42.095 03:08:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:42.095 03:08:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:42.095 03:08:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:42.095 03:08:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:42.095 03:08:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.095 03:08:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:42.095 03:08:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:42.095 03:08:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:42.095 03:08:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.095 03:08:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.095 03:08:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.095 03:08:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:42.095 03:08:37 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.095 03:08:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:42.095 03:08:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.095 03:08:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:42.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:42.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:20:42.095 00:20:42.095 --- 10.0.0.2 ping statistics --- 00:20:42.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.095 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:20:42.095 03:08:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:42.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:42.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:20:42.095 00:20:42.095 --- 10.0.0.1 ping statistics --- 00:20:42.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.095 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:20:42.095 03:08:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.095 03:08:37 -- nvmf/common.sh@410 -- # return 0 00:20:42.095 03:08:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:42.095 03:08:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.095 03:08:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:42.095 03:08:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:42.095 03:08:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.095 03:08:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:42.095 03:08:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:42.095 03:08:37 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:42.095 03:08:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:42.095 03:08:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:42.095 03:08:37 -- 
common/autotest_common.sh@10 -- # set +x 00:20:42.095 03:08:37 -- nvmf/common.sh@469 -- # nvmfpid=2036598 00:20:42.095 03:08:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:42.095 03:08:37 -- nvmf/common.sh@470 -- # waitforlisten 2036598 00:20:42.095 03:08:37 -- common/autotest_common.sh@819 -- # '[' -z 2036598 ']' 00:20:42.095 03:08:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.095 03:08:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:42.095 03:08:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.095 03:08:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:42.095 03:08:37 -- common/autotest_common.sh@10 -- # set +x 00:20:42.095 [2024-07-14 03:08:37.288173] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:20:42.095 [2024-07-14 03:08:37.288273] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.095 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.354 [2024-07-14 03:08:37.356900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.354 [2024-07-14 03:08:37.443891] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:42.354 [2024-07-14 03:08:37.444083] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.354 [2024-07-14 03:08:37.444103] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:42.354 [2024-07-14 03:08:37.444117] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:42.354 [2024-07-14 03:08:37.444161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.921 03:08:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:42.921 03:08:38 -- common/autotest_common.sh@852 -- # return 0 00:20:42.921 03:08:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:42.921 03:08:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:42.921 03:08:38 -- common/autotest_common.sh@10 -- # set +x 00:20:43.179 03:08:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.179 03:08:38 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:43.179 03:08:38 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:43.179 03:08:38 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:43.179 03:08:38 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:43.179 03:08:38 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:43.179 03:08:38 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:43.179 03:08:38 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:43.179 03:08:38 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:43.179 [2024-07-14 03:08:38.402425] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.179 [2024-07-14 03:08:38.418425] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:43.179 [2024-07-14 03:08:38.418625] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:20:43.438 malloc0 00:20:43.438 03:08:38 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:43.438 03:08:38 -- fips/fips.sh@148 -- # bdevperf_pid=2036757 00:20:43.438 03:08:38 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:43.438 03:08:38 -- fips/fips.sh@149 -- # waitforlisten 2036757 /var/tmp/bdevperf.sock 00:20:43.438 03:08:38 -- common/autotest_common.sh@819 -- # '[' -z 2036757 ']' 00:20:43.438 03:08:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:43.438 03:08:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:43.438 03:08:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:43.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:43.438 03:08:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:43.438 03:08:38 -- common/autotest_common.sh@10 -- # set +x 00:20:43.438 [2024-07-14 03:08:38.539321] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:20:43.438 [2024-07-14 03:08:38.539420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2036757 ] 00:20:43.438 EAL: No free 2048 kB hugepages reported on node 1 00:20:43.438 [2024-07-14 03:08:38.596905] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:43.438 [2024-07-14 03:08:38.677969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.372 03:08:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:44.372 03:08:39 -- common/autotest_common.sh@852 -- # return 0 00:20:44.372 03:08:39 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:44.630 [2024-07-14 03:08:39.691710] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:44.630 TLSTESTn1 00:20:44.630 03:08:39 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:44.889 Running I/O for 10 seconds... 
00:20:54.857 00:20:54.857 Latency(us) 00:20:54.857 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.857 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:54.857 Verification LBA range: start 0x0 length 0x2000 00:20:54.857 TLSTESTn1 : 10.05 1466.98 5.73 0.00 0.00 87076.58 8107.05 91264.95 00:20:54.857 =================================================================================================================== 00:20:54.857 Total : 1466.98 5.73 0.00 0.00 87076.58 8107.05 91264.95 00:20:54.857 0 00:20:54.857 03:08:49 -- fips/fips.sh@1 -- # cleanup 00:20:54.857 03:08:49 -- fips/fips.sh@15 -- # process_shm --id 0 00:20:54.858 03:08:49 -- common/autotest_common.sh@796 -- # type=--id 00:20:54.858 03:08:49 -- common/autotest_common.sh@797 -- # id=0 00:20:54.858 03:08:49 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:20:54.858 03:08:49 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:54.858 03:08:49 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:20:54.858 03:08:49 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:20:54.858 03:08:49 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:20:54.858 03:08:49 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:54.858 nvmf_trace.0 00:20:54.858 03:08:50 -- common/autotest_common.sh@811 -- # return 0 00:20:54.858 03:08:50 -- fips/fips.sh@16 -- # killprocess 2036757 00:20:54.858 03:08:50 -- common/autotest_common.sh@926 -- # '[' -z 2036757 ']' 00:20:54.858 03:08:50 -- common/autotest_common.sh@930 -- # kill -0 2036757 00:20:54.858 03:08:50 -- common/autotest_common.sh@931 -- # uname 00:20:54.858 03:08:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:54.858 03:08:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2036757 00:20:54.858 
03:08:50 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:54.858 03:08:50 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:54.858 03:08:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2036757' 00:20:54.858 killing process with pid 2036757 00:20:54.858 03:08:50 -- common/autotest_common.sh@945 -- # kill 2036757 00:20:54.858 Received shutdown signal, test time was about 10.000000 seconds 00:20:54.858 00:20:54.858 Latency(us) 00:20:54.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:54.858 =================================================================================================================== 00:20:54.858 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:54.858 03:08:50 -- common/autotest_common.sh@950 -- # wait 2036757 00:20:55.115 03:08:50 -- fips/fips.sh@17 -- # nvmftestfini 00:20:55.115 03:08:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:55.115 03:08:50 -- nvmf/common.sh@116 -- # sync 00:20:55.115 03:08:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:55.115 03:08:50 -- nvmf/common.sh@119 -- # set +e 00:20:55.115 03:08:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:55.115 03:08:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:55.115 rmmod nvme_tcp 00:20:55.115 rmmod nvme_fabrics 00:20:55.115 rmmod nvme_keyring 00:20:55.115 03:08:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:55.115 03:08:50 -- nvmf/common.sh@123 -- # set -e 00:20:55.115 03:08:50 -- nvmf/common.sh@124 -- # return 0 00:20:55.115 03:08:50 -- nvmf/common.sh@477 -- # '[' -n 2036598 ']' 00:20:55.115 03:08:50 -- nvmf/common.sh@478 -- # killprocess 2036598 00:20:55.115 03:08:50 -- common/autotest_common.sh@926 -- # '[' -z 2036598 ']' 00:20:55.115 03:08:50 -- common/autotest_common.sh@930 -- # kill -0 2036598 00:20:55.115 03:08:50 -- common/autotest_common.sh@931 -- # uname 00:20:55.373 03:08:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:20:55.373 03:08:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2036598 00:20:55.373 03:08:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:55.373 03:08:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:55.373 03:08:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2036598' 00:20:55.373 killing process with pid 2036598 00:20:55.373 03:08:50 -- common/autotest_common.sh@945 -- # kill 2036598 00:20:55.373 03:08:50 -- common/autotest_common.sh@950 -- # wait 2036598 00:20:55.630 03:08:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:55.630 03:08:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:55.630 03:08:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:55.630 03:08:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:55.630 03:08:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:55.630 03:08:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.630 03:08:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:55.630 03:08:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.532 03:08:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:57.532 03:08:52 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:57.532 00:20:57.532 real 0m17.786s 00:20:57.532 user 0m22.682s 00:20:57.532 sys 0m6.327s 00:20:57.532 03:08:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:57.532 03:08:52 -- common/autotest_common.sh@10 -- # set +x 00:20:57.532 ************************************ 00:20:57.532 END TEST nvmf_fips 00:20:57.532 ************************************ 00:20:57.532 03:08:52 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:20:57.532 03:08:52 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:57.532 03:08:52 -- 
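[Editor's note] The `killprocess` calls traced above (autotest_common.sh@926-950) follow a guarded pattern: verify the PID is alive, read its command name with `ps`, refuse to signal a `sudo` wrapper, then kill and wait. A minimal sketch of that pattern, under the assumption of a Linux procps `ps`; the function body is a simplification of the harness helper, not a copy of it.

```shell
#!/usr/bin/env bash
# Sketch of the guarded kill used by the autotest harness: never signal a
# PID blindly -- confirm it exists and is not a sudo wrapper first.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1        # '[' -z ... / kill -0 guard
    local name
    name=$(ps --no-headers -o comm= "$pid")       # as in autotest_common.sh@932
    [[ $name != sudo ]] || return 1               # '[' reactor_N = sudo ']' guard
    echo "killing process with pid $pid"
    kill "$pid"
}
```

In the log this is what tears down the bdevperf reactor (pid 2036757) and then the nvmf target (pid 2036598), with the subsequent `wait` collecting each exit status.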
common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:57.532 03:08:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:57.532 03:08:52 -- common/autotest_common.sh@10 -- # set +x 00:20:57.532 ************************************ 00:20:57.532 START TEST nvmf_fuzz 00:20:57.532 ************************************ 00:20:57.532 03:08:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:20:57.532 * Looking for test storage... 00:20:57.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:57.532 03:08:52 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:57.532 03:08:52 -- nvmf/common.sh@7 -- # uname -s 00:20:57.532 03:08:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:57.532 03:08:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:57.532 03:08:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:57.532 03:08:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:57.532 03:08:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:57.532 03:08:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:57.532 03:08:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:57.532 03:08:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:57.532 03:08:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:57.532 03:08:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:57.532 03:08:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.532 03:08:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.532 03:08:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:57.532 03:08:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:57.532 03:08:52 -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:20:57.532 03:08:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:57.532 03:08:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.532 03:08:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.532 03:08:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.532 03:08:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.532 03:08:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.532 03:08:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.532 03:08:52 -- paths/export.sh@5 -- # export PATH 00:20:57.532 03:08:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.532 03:08:52 -- nvmf/common.sh@46 -- # : 0 00:20:57.532 03:08:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:57.532 03:08:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:57.532 03:08:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:57.532 03:08:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:57.532 03:08:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:57.532 03:08:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:57.532 03:08:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:57.532 03:08:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:57.532 03:08:52 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:57.532 03:08:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:57.532 03:08:52 -- nvmf/common.sh@434 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:20:57.532 03:08:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:57.532 03:08:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:57.532 03:08:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:57.532 03:08:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.532 03:08:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.532 03:08:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.532 03:08:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:57.532 03:08:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:57.532 03:08:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:57.532 03:08:52 -- common/autotest_common.sh@10 -- # set +x 00:21:00.061 03:08:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:00.061 03:08:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:00.061 03:08:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:00.061 03:08:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:00.061 03:08:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:00.061 03:08:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:00.061 03:08:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:00.061 03:08:54 -- nvmf/common.sh@294 -- # net_devs=() 00:21:00.061 03:08:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:00.061 03:08:54 -- nvmf/common.sh@295 -- # e810=() 00:21:00.061 03:08:54 -- nvmf/common.sh@295 -- # local -ga e810 00:21:00.061 03:08:54 -- nvmf/common.sh@296 -- # x722=() 00:21:00.061 03:08:54 -- nvmf/common.sh@296 -- # local -ga x722 00:21:00.061 03:08:54 -- nvmf/common.sh@297 -- # mlx=() 00:21:00.061 03:08:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:00.061 03:08:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.061 03:08:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.061 03:08:54 -- nvmf/common.sh@303 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.061 03:08:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.061 03:08:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.061 03:08:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.061 03:08:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.061 03:08:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.061 03:08:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.061 03:08:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.061 03:08:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.061 03:08:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:00.061 03:08:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:00.061 03:08:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:00.061 03:08:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:00.061 03:08:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:00.061 03:08:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:00.061 03:08:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:00.061 03:08:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:00.061 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:00.061 03:08:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:00.061 03:08:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:00.061 03:08:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.061 03:08:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.061 03:08:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:00.061 03:08:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:00.061 03:08:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:00.061 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:21:00.061 03:08:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:00.061 03:08:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:00.061 03:08:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.061 03:08:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.061 03:08:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:00.062 03:08:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:00.062 03:08:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:00.062 03:08:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:00.062 03:08:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:00.062 03:08:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.062 03:08:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:00.062 03:08:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.062 03:08:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:00.062 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:00.062 03:08:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.062 03:08:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:00.062 03:08:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.062 03:08:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:00.062 03:08:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.062 03:08:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:00.062 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:00.062 03:08:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.062 03:08:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:00.062 03:08:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:00.062 03:08:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:00.062 03:08:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:00.062 03:08:54 -- 
nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:00.062 03:08:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.062 03:08:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.062 03:08:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.062 03:08:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:00.062 03:08:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.062 03:08:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.062 03:08:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:00.062 03:08:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.062 03:08:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.062 03:08:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:00.062 03:08:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:00.062 03:08:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.062 03:08:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.062 03:08:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.062 03:08:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.062 03:08:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:00.062 03:08:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.062 03:08:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.062 03:08:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.062 03:08:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:00.062 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:00.062 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:21:00.062 00:21:00.062 --- 10.0.0.2 ping statistics --- 00:21:00.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.062 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:21:00.062 03:08:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.062 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:00.062 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:21:00.062 00:21:00.062 --- 10.0.0.1 ping statistics --- 00:21:00.062 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.062 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:21:00.062 03:08:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.062 03:08:54 -- nvmf/common.sh@410 -- # return 0 00:21:00.062 03:08:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:00.062 03:08:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.062 03:08:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:00.062 03:08:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:00.062 03:08:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.062 03:08:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:00.062 03:08:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:00.062 03:08:54 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2040189 00:21:00.062 03:08:54 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:00.062 03:08:54 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:00.062 03:08:54 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2040189 00:21:00.062 03:08:54 -- common/autotest_common.sh@819 -- # '[' -z 2040189 ']' 00:21:00.062 03:08:54 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:21:00.062 03:08:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:00.062 03:08:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.062 03:08:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:00.062 03:08:54 -- common/autotest_common.sh@10 -- # set +x 00:21:00.995 03:08:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:00.995 03:08:55 -- common/autotest_common.sh@852 -- # return 0 00:21:00.995 03:08:55 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:00.995 03:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:00.995 03:08:55 -- common/autotest_common.sh@10 -- # set +x 00:21:00.995 03:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:00.995 03:08:55 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:00.995 03:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:00.995 03:08:55 -- common/autotest_common.sh@10 -- # set +x 00:21:00.995 Malloc0 00:21:00.995 03:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:00.995 03:08:55 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:00.995 03:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:00.995 03:08:55 -- common/autotest_common.sh@10 -- # set +x 00:21:00.995 03:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:00.995 03:08:55 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:00.995 03:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:00.995 03:08:55 -- common/autotest_common.sh@10 -- # set +x 00:21:00.995 03:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
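The xtrace records above show fabrics_fuzz.sh provisioning its target over RPC: a TCP transport, a Malloc bdev, a subsystem, a namespace, and (just below) a listener on 10.0.0.2:4420. A dry-run sketch of that sequence — the RPC method names and arguments are taken from the log, but the `RPC` wrapper and the `echo` indirection are illustrative, not part of the suite:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the fuzz-target provisioning seen in the log above.
# RPC here only echoes the invocation; drop the "echo" and point at
# scripts/rpc.py against a running nvmf_tgt to apply the calls for real.
RPC="echo rpc.py"   # hypothetical wrapper; the job uses rpc_cmd internally

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create -b Malloc0 64 512
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```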
00:21:00.995 03:08:55 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:00.995 03:08:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:00.995 03:08:55 -- common/autotest_common.sh@10 -- # set +x 00:21:00.995 03:08:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:00.995 03:08:55 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:00.995 03:08:55 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:21:33.094 Fuzzing completed. Shutting down the fuzz application 00:21:33.094 00:21:33.094 Dumping successful admin opcodes: 00:21:33.094 8, 9, 10, 24, 00:21:33.094 Dumping successful io opcodes: 00:21:33.094 0, 9, 00:21:33.094 NS: 0x200003aeff00 I/O qp, Total commands completed: 445323, total successful commands: 2587, random_seed: 410339072 00:21:33.094 NS: 0x200003aeff00 admin qp, Total commands completed: 55664, total successful commands: 443, random_seed: 2909265408 00:21:33.094 03:09:26 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:33.094 Fuzzing completed. 
Shutting down the fuzz application 00:21:33.094 00:21:33.094 Dumping successful admin opcodes: 00:21:33.094 24, 00:21:33.094 Dumping successful io opcodes: 00:21:33.094 00:21:33.094 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 982103340 00:21:33.094 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 982224099 00:21:33.094 03:09:27 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:33.094 03:09:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:33.094 03:09:27 -- common/autotest_common.sh@10 -- # set +x 00:21:33.094 03:09:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:33.094 03:09:27 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:33.094 03:09:27 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:33.094 03:09:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:33.094 03:09:27 -- nvmf/common.sh@116 -- # sync 00:21:33.094 03:09:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:33.094 03:09:27 -- nvmf/common.sh@119 -- # set +e 00:21:33.094 03:09:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:33.094 03:09:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:33.094 rmmod nvme_tcp 00:21:33.094 rmmod nvme_fabrics 00:21:33.094 rmmod nvme_keyring 00:21:33.094 03:09:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:33.094 03:09:27 -- nvmf/common.sh@123 -- # set -e 00:21:33.094 03:09:27 -- nvmf/common.sh@124 -- # return 0 00:21:33.094 03:09:27 -- nvmf/common.sh@477 -- # '[' -n 2040189 ']' 00:21:33.094 03:09:27 -- nvmf/common.sh@478 -- # killprocess 2040189 00:21:33.094 03:09:27 -- common/autotest_common.sh@926 -- # '[' -z 2040189 ']' 00:21:33.094 03:09:27 -- common/autotest_common.sh@930 -- # kill -0 2040189 00:21:33.094 03:09:27 -- common/autotest_common.sh@931 -- # uname 00:21:33.094 03:09:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
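The teardown above follows the trap pattern the suite registered at startup (`trap 'process_shm ...; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT`), then disarms it with `trap - SIGINT SIGTERM EXIT` before running the orderly nvmftestfini path. A minimal sketch of that pattern — the cleanup body is illustrative, not the real helpers:

```shell
#!/usr/bin/env bash
# Sketch of the trap-based cleanup used by the test scripts: teardown fires
# even if the test crashes or is interrupted; on the success path the trap
# is disarmed and teardown is invoked explicitly instead.
cleanup() { echo "teardown: kill target, remove netns"; }
trap cleanup SIGINT SIGTERM EXIT

echo "test body runs"

trap - SIGINT SIGTERM EXIT   # success path: disarm, then tear down explicitly
cleanup
```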
00:21:33.094 03:09:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2040189 00:21:33.094 03:09:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:33.094 03:09:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:33.094 03:09:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2040189' 00:21:33.094 killing process with pid 2040189 00:21:33.094 03:09:27 -- common/autotest_common.sh@945 -- # kill 2040189 00:21:33.094 03:09:27 -- common/autotest_common.sh@950 -- # wait 2040189 00:21:33.094 03:09:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:33.094 03:09:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:33.094 03:09:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:33.094 03:09:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:33.094 03:09:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:33.094 03:09:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.094 03:09:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.094 03:09:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.994 03:09:30 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:34.994 03:09:30 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:34.994 00:21:34.994 real 0m37.492s 00:21:34.994 user 0m51.435s 00:21:34.994 sys 0m15.326s 00:21:34.994 03:09:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:34.994 03:09:30 -- common/autotest_common.sh@10 -- # set +x 00:21:34.994 ************************************ 00:21:34.994 END TEST nvmf_fuzz 00:21:34.994 ************************************ 00:21:34.994 03:09:30 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh 
--transport=tcp 00:21:34.994 03:09:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:34.994 03:09:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:34.994 03:09:30 -- common/autotest_common.sh@10 -- # set +x 00:21:34.994 ************************************ 00:21:34.994 START TEST nvmf_multiconnection 00:21:34.994 ************************************ 00:21:34.994 03:09:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:35.252 * Looking for test storage... 00:21:35.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:35.252 03:09:30 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:35.252 03:09:30 -- nvmf/common.sh@7 -- # uname -s 00:21:35.252 03:09:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.252 03:09:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.252 03:09:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.252 03:09:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.252 03:09:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.252 03:09:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.252 03:09:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.252 03:09:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.252 03:09:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.252 03:09:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.252 03:09:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.252 03:09:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.252 03:09:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.252 03:09:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:21:35.252 03:09:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:35.252 03:09:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:35.252 03:09:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.252 03:09:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.252 03:09:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.252 03:09:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.252 03:09:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.252 03:09:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.252 03:09:30 -- paths/export.sh@5 -- # export PATH 00:21:35.252 03:09:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.252 03:09:30 -- nvmf/common.sh@46 -- # : 0 00:21:35.252 03:09:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:35.252 03:09:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:35.252 03:09:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:35.252 03:09:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.252 03:09:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.252 03:09:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:35.252 03:09:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:35.252 03:09:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:35.252 03:09:30 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:35.252 03:09:30 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:35.252 03:09:30 -- 
target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:35.252 03:09:30 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:35.252 03:09:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:35.252 03:09:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.252 03:09:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:35.252 03:09:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:35.252 03:09:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:35.252 03:09:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.252 03:09:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.252 03:09:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.252 03:09:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:35.252 03:09:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:35.252 03:09:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:35.252 03:09:30 -- common/autotest_common.sh@10 -- # set +x 00:21:37.149 03:09:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:37.149 03:09:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:37.149 03:09:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:37.149 03:09:32 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:37.149 03:09:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:37.149 03:09:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:37.149 03:09:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:37.149 03:09:32 -- nvmf/common.sh@294 -- # net_devs=() 00:21:37.149 03:09:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:37.149 03:09:32 -- nvmf/common.sh@295 -- # e810=() 00:21:37.149 03:09:32 -- nvmf/common.sh@295 -- # local -ga e810 00:21:37.149 03:09:32 -- nvmf/common.sh@296 -- # x722=() 00:21:37.149 03:09:32 -- nvmf/common.sh@296 -- # local -ga x722 00:21:37.149 03:09:32 -- nvmf/common.sh@297 -- # mlx=() 00:21:37.149 03:09:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:37.149 
03:09:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:37.149 03:09:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:37.149 03:09:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:37.149 03:09:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:37.149 03:09:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:37.149 03:09:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:37.149 03:09:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:37.149 03:09:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:37.149 03:09:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:37.149 03:09:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:37.149 03:09:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:37.149 03:09:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:37.149 03:09:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:37.149 03:09:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:37.149 03:09:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:37.149 03:09:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:37.149 03:09:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:37.149 03:09:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:37.149 03:09:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:37.149 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:37.149 03:09:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:37.149 03:09:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:37.149 03:09:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.149 03:09:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.149 03:09:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
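The device-discovery step above buckets NICs from a vendor:device cache into e810/x722/mlx arrays before resolving their net devices. A simplified sketch of that classification — `pci_bus_cache` is populated elsewhere in nvmf/common.sh, so the cache contents here are stand-ins seeded with the two E810 functions this log actually finds:

```shell
#!/usr/bin/env bash
# Sketch of the pci_devs classification: keys are "vendor:device" IDs,
# values are space-separated PCI addresses. Unset keys expand to nothing,
# so unsupported device IDs simply contribute no entries.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:0a:00.0 0000:0a:00.1"   # E810-XXV, as in this log
)
e810=() x722=()
e810+=(${pci_bus_cache["0x8086:0x1592"]})   # unquoted on purpose: word-split
e810+=(${pci_bus_cache["0x8086:0x159b"]})
x722+=(${pci_bus_cache["0x8086:0x37d2"]})
pci_devs=("${e810[@]}")
echo "${#pci_devs[@]}"   # prints 2
```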
00:21:37.149 03:09:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:37.149 03:09:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:37.149 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:37.149 03:09:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:37.149 03:09:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:37.149 03:09:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:37.149 03:09:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:37.149 03:09:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:37.149 03:09:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:37.149 03:09:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:37.149 03:09:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:37.149 03:09:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:37.149 03:09:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.149 03:09:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:37.149 03:09:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.149 03:09:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:37.149 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:37.149 03:09:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.149 03:09:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:37.149 03:09:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:37.149 03:09:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:37.149 03:09:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:37.149 03:09:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:37.149 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:37.149 03:09:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:37.149 03:09:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:37.149 
03:09:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:37.149 03:09:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:37.149 03:09:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:37.149 03:09:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:37.149 03:09:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:37.149 03:09:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:37.149 03:09:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:37.149 03:09:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:37.149 03:09:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:37.149 03:09:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:37.149 03:09:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:37.149 03:09:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:37.149 03:09:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:37.149 03:09:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:37.149 03:09:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:37.149 03:09:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:37.149 03:09:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:37.149 03:09:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:37.149 03:09:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:37.149 03:09:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:37.149 03:09:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:37.149 03:09:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:37.149 03:09:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:37.149 03:09:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:37.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:37.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:21:37.149 00:21:37.150 --- 10.0.0.2 ping statistics --- 00:21:37.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.150 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:21:37.150 03:09:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:37.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:37.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:21:37.150 00:21:37.150 --- 10.0.0.1 ping statistics --- 00:21:37.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:37.150 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:21:37.150 03:09:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:37.150 03:09:32 -- nvmf/common.sh@410 -- # return 0 00:21:37.150 03:09:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:37.150 03:09:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:37.150 03:09:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:37.150 03:09:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:37.150 03:09:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:37.150 03:09:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:37.150 03:09:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:37.150 03:09:32 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:37.150 03:09:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:37.150 03:09:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:37.150 03:09:32 -- common/autotest_common.sh@10 -- # set +x 00:21:37.150 03:09:32 -- nvmf/common.sh@469 -- # nvmfpid=2046569 00:21:37.150 03:09:32 -- nvmf/common.sh@470 -- # waitforlisten 2046569 00:21:37.150 03:09:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:37.150 03:09:32 -- 
common/autotest_common.sh@819 -- # '[' -z 2046569 ']' 00:21:37.150 03:09:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.150 03:09:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:37.150 03:09:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.150 03:09:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:37.150 03:09:32 -- common/autotest_common.sh@10 -- # set +x 00:21:37.408 [2024-07-14 03:09:32.405031] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:21:37.408 [2024-07-14 03:09:32.405115] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.408 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.408 [2024-07-14 03:09:32.478003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:37.408 [2024-07-14 03:09:32.567258] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:37.408 [2024-07-14 03:09:32.567422] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.408 [2024-07-14 03:09:32.567440] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.408 [2024-07-14 03:09:32.567452] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
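`waitforlisten` above blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock, with `max_retries=100` as the budget. A simplified sketch of that polling loop — the function name and the bare socket-file test are assumptions; the real helper also probes the RPC endpoint itself:

```shell
#!/usr/bin/env bash
# Sketch of a waitforlisten-style loop: poll for the app's UNIX-domain RPC
# socket with a bounded retry budget, returning nonzero on timeout so the
# caller's error trap can fire.
wait_for_sock() {
  local sock="$1" max_retries="${2:-100}"
  local i
  for ((i = 0; i < max_retries; i++)); do
    [ -S "$sock" ] && return 0   # socket exists: daemon is listening
    sleep 0.1
  done
  return 1
}
```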
00:21:37.408 [2024-07-14 03:09:32.567504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.408 [2024-07-14 03:09:32.567564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:37.408 [2024-07-14 03:09:32.567629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:37.408 [2024-07-14 03:09:32.567631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.341 03:09:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:38.341 03:09:33 -- common/autotest_common.sh@852 -- # return 0 00:21:38.341 03:09:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:38.341 03:09:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:38.341 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.341 03:09:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.341 03:09:33 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:38.341 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.341 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.341 [2024-07-14 03:09:33.362400] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.341 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.341 03:09:33 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:38.341 03:09:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:38.341 03:09:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:38.341 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.341 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.341 Malloc1 00:21:38.341 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.341 03:09:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:38.341 03:09:33 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.341 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.341 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.341 03:09:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:38.341 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.341 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.341 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.341 03:09:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:38.341 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.341 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.341 [2024-07-14 03:09:33.417367] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.341 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.341 03:09:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:38.341 03:09:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:38.341 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.341 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.341 Malloc2 00:21:38.341 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.341 03:09:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:38.341 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.341 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.341 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.341 03:09:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:38.341 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:21:38.341 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.342 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.342 03:09:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:38.342 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.342 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.342 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.342 03:09:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:38.342 03:09:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:38.342 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.342 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.342 Malloc3 00:21:38.342 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.342 03:09:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:38.342 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.342 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.342 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.342 03:09:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:38.342 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.342 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.342 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.342 03:09:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:38.342 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.342 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.342 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.342 03:09:33 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:38.342 03:09:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:38.342 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.342 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.342 Malloc4 00:21:38.342 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.342 03:09:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:38.342 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.342 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.342 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.342 03:09:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:38.342 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.342 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.342 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.342 03:09:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:21:38.342 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.342 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.342 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.342 03:09:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:38.342 03:09:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:38.342 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.342 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.342 Malloc5 00:21:38.342 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.342 03:09:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s 
SPDK5 00:21:38.342 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.342 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.342 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.342 03:09:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:38.342 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.342 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.600 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.600 03:09:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:21:38.600 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.600 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.600 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.600 03:09:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:38.600 03:09:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:38.600 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.600 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.600 Malloc6 00:21:38.600 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.600 03:09:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:38.600 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.600 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.600 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.600 03:09:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:38.600 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.600 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.600 03:09:33 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.600 03:09:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:21:38.600 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.600 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.600 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.600 03:09:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:38.600 03:09:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:38.600 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.600 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.600 Malloc7 00:21:38.600 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.600 03:09:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:38.600 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.600 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.600 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.600 03:09:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:38.600 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.600 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.600 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.600 03:09:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:21:38.600 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.600 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.600 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.600 03:09:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:38.600 03:09:33 -- 
target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:38.600 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.600 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.600 Malloc8 00:21:38.600 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.600 03:09:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:38.600 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.600 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.600 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.600 03:09:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:38.600 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.600 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.600 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.600 03:09:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:21:38.600 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.600 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.600 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.600 03:09:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:38.600 03:09:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:38.600 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.600 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.600 Malloc9 00:21:38.600 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.600 03:09:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:38.600 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:21:38.600 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.600 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.601 03:09:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:38.601 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.601 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.601 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.601 03:09:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:21:38.601 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.601 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.601 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.601 03:09:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:38.601 03:09:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:38.601 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.601 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.601 Malloc10 00:21:38.601 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.601 03:09:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:38.601 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.601 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.601 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.601 03:09:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:38.601 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.601 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.859 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.859 03:09:33 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:21:38.859 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.859 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.859 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.859 03:09:33 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:38.859 03:09:33 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:38.859 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.859 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.859 Malloc11 00:21:38.859 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.859 03:09:33 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:38.859 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.859 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.859 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.859 03:09:33 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:38.859 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.859 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.859 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.859 03:09:33 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:21:38.859 03:09:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:38.859 03:09:33 -- common/autotest_common.sh@10 -- # set +x 00:21:38.859 03:09:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:38.859 03:09:33 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:38.859 03:09:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:38.859 03:09:33 
-- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:39.424 03:09:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:39.424 03:09:34 -- common/autotest_common.sh@1177 -- # local i=0 00:21:39.424 03:09:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:39.424 03:09:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:39.424 03:09:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:41.949 03:09:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:41.949 03:09:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:41.949 03:09:36 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:21:41.949 03:09:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:41.949 03:09:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:41.949 03:09:36 -- common/autotest_common.sh@1187 -- # return 0 00:21:41.949 03:09:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:41.949 03:09:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:21:42.208 03:09:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:42.208 03:09:37 -- common/autotest_common.sh@1177 -- # local i=0 00:21:42.208 03:09:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:42.208 03:09:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:42.208 03:09:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:44.133 03:09:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:44.134 03:09:39 -- common/autotest_common.sh@1186 -- # lsblk 
-l -o NAME,SERIAL 00:21:44.134 03:09:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:21:44.134 03:09:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:44.134 03:09:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:44.134 03:09:39 -- common/autotest_common.sh@1187 -- # return 0 00:21:44.134 03:09:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:44.134 03:09:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:21:45.073 03:09:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:45.073 03:09:40 -- common/autotest_common.sh@1177 -- # local i=0 00:21:45.073 03:09:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:45.073 03:09:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:45.073 03:09:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:46.972 03:09:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:46.972 03:09:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:46.972 03:09:42 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:21:46.972 03:09:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:46.972 03:09:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:46.972 03:09:42 -- common/autotest_common.sh@1187 -- # return 0 00:21:46.972 03:09:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.972 03:09:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:21:47.907 03:09:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 
00:21:47.907 03:09:42 -- common/autotest_common.sh@1177 -- # local i=0 00:21:47.907 03:09:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:47.907 03:09:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:47.907 03:09:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:49.805 03:09:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:49.805 03:09:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:49.805 03:09:44 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:21:49.805 03:09:44 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:49.805 03:09:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:49.805 03:09:44 -- common/autotest_common.sh@1187 -- # return 0 00:21:49.805 03:09:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.805 03:09:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:21:50.371 03:09:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:50.371 03:09:45 -- common/autotest_common.sh@1177 -- # local i=0 00:21:50.371 03:09:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:50.371 03:09:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:50.371 03:09:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:52.899 03:09:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:52.899 03:09:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:52.899 03:09:47 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:21:52.899 03:09:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:52.899 03:09:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:52.899 03:09:47 -- 
common/autotest_common.sh@1187 -- # return 0 00:21:52.899 03:09:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:52.899 03:09:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:21:53.465 03:09:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:53.465 03:09:48 -- common/autotest_common.sh@1177 -- # local i=0 00:21:53.465 03:09:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:53.465 03:09:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:53.465 03:09:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:55.364 03:09:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:55.364 03:09:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:55.364 03:09:50 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:21:55.364 03:09:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:55.364 03:09:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:55.364 03:09:50 -- common/autotest_common.sh@1187 -- # return 0 00:21:55.364 03:09:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:55.364 03:09:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:21:56.298 03:09:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:56.298 03:09:51 -- common/autotest_common.sh@1177 -- # local i=0 00:21:56.298 03:09:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:56.298 03:09:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:56.298 03:09:51 -- common/autotest_common.sh@1184 
-- # sleep 2 00:21:58.201 03:09:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:58.201 03:09:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:58.201 03:09:53 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:21:58.201 03:09:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:58.201 03:09:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:58.201 03:09:53 -- common/autotest_common.sh@1187 -- # return 0 00:21:58.201 03:09:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.201 03:09:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:21:59.132 03:09:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:59.132 03:09:54 -- common/autotest_common.sh@1177 -- # local i=0 00:21:59.132 03:09:54 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:59.132 03:09:54 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:59.132 03:09:54 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:01.028 03:09:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:01.028 03:09:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:01.028 03:09:56 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:22:01.028 03:09:56 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:01.028 03:09:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:01.028 03:09:56 -- common/autotest_common.sh@1187 -- # return 0 00:22:01.028 03:09:56 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.028 03:09:56 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:01.966 03:09:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:01.966 03:09:57 -- common/autotest_common.sh@1177 -- # local i=0 00:22:01.966 03:09:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:01.966 03:09:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:01.966 03:09:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:04.526 03:09:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:04.526 03:09:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:04.526 03:09:59 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:22:04.526 03:09:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:04.526 03:09:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:04.526 03:09:59 -- common/autotest_common.sh@1187 -- # return 0 00:22:04.526 03:09:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.526 03:09:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:05.091 03:10:00 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:05.091 03:10:00 -- common/autotest_common.sh@1177 -- # local i=0 00:22:05.091 03:10:00 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:05.091 03:10:00 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:05.091 03:10:00 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:06.989 03:10:02 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:06.989 03:10:02 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:06.989 03:10:02 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:22:06.989 03:10:02 -- 
common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:06.989 03:10:02 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:06.989 03:10:02 -- common/autotest_common.sh@1187 -- # return 0 00:22:06.989 03:10:02 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.989 03:10:02 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:07.922 03:10:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:07.922 03:10:03 -- common/autotest_common.sh@1177 -- # local i=0 00:22:07.922 03:10:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:07.922 03:10:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:07.922 03:10:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:09.851 03:10:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:09.851 03:10:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:09.851 03:10:05 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:22:09.851 03:10:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:09.851 03:10:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:09.851 03:10:05 -- common/autotest_common.sh@1187 -- # return 0 00:22:09.851 03:10:05 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:10.109 [global] 00:22:10.109 thread=1 00:22:10.109 invalidate=1 00:22:10.109 rw=read 00:22:10.109 time_based=1 00:22:10.109 runtime=10 00:22:10.109 ioengine=libaio 00:22:10.109 direct=1 00:22:10.109 bs=262144 00:22:10.109 iodepth=64 00:22:10.109 norandommap=1 00:22:10.109 numjobs=1 00:22:10.109 00:22:10.109 [job0] 00:22:10.109 filename=/dev/nvme0n1 00:22:10.109 [job1] 
00:22:10.109 filename=/dev/nvme10n1 00:22:10.109 [job2] 00:22:10.109 filename=/dev/nvme1n1 00:22:10.109 [job3] 00:22:10.109 filename=/dev/nvme2n1 00:22:10.109 [job4] 00:22:10.109 filename=/dev/nvme3n1 00:22:10.109 [job5] 00:22:10.109 filename=/dev/nvme4n1 00:22:10.109 [job6] 00:22:10.109 filename=/dev/nvme5n1 00:22:10.109 [job7] 00:22:10.109 filename=/dev/nvme6n1 00:22:10.109 [job8] 00:22:10.109 filename=/dev/nvme7n1 00:22:10.109 [job9] 00:22:10.109 filename=/dev/nvme8n1 00:22:10.109 [job10] 00:22:10.109 filename=/dev/nvme9n1 00:22:10.109 Could not set queue depth (nvme0n1) 00:22:10.109 Could not set queue depth (nvme10n1) 00:22:10.109 Could not set queue depth (nvme1n1) 00:22:10.109 Could not set queue depth (nvme2n1) 00:22:10.109 Could not set queue depth (nvme3n1) 00:22:10.109 Could not set queue depth (nvme4n1) 00:22:10.109 Could not set queue depth (nvme5n1) 00:22:10.109 Could not set queue depth (nvme6n1) 00:22:10.109 Could not set queue depth (nvme7n1) 00:22:10.109 Could not set queue depth (nvme8n1) 00:22:10.109 Could not set queue depth (nvme9n1) 00:22:10.367 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.367 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.367 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.367 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.367 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.367 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.367 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.367 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 
256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.367 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.367 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.367 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:10.367 fio-3.35 00:22:10.367 Starting 11 threads 00:22:22.581 00:22:22.581 job0: (groupid=0, jobs=1): err= 0: pid=2051206: Sun Jul 14 03:10:15 2024 00:22:22.581 read: IOPS=513, BW=128MiB/s (134MB/s)(1303MiB/10161msec) 00:22:22.581 slat (usec): min=10, max=244057, avg=1439.85, stdev=7816.76 00:22:22.581 clat (msec): min=2, max=519, avg=123.19, stdev=96.00 00:22:22.581 lat (msec): min=2, max=519, avg=124.63, stdev=97.06 00:22:22.581 clat percentiles (msec): 00:22:22.581 | 1.00th=[ 6], 5.00th=[ 17], 10.00th=[ 25], 20.00th=[ 35], 00:22:22.581 | 30.00th=[ 51], 40.00th=[ 79], 50.00th=[ 103], 60.00th=[ 132], 00:22:22.581 | 70.00th=[ 161], 80.00th=[ 194], 90.00th=[ 271], 95.00th=[ 300], 00:22:22.581 | 99.00th=[ 418], 99.50th=[ 502], 99.90th=[ 518], 99.95th=[ 518], 00:22:22.581 | 99.99th=[ 518] 00:22:22.581 bw ( KiB/s): min=53760, max=418816, per=9.03%, avg=131830.55, stdev=82218.69, samples=20 00:22:22.581 iops : min= 210, max= 1636, avg=514.95, stdev=321.17, samples=20 00:22:22.581 lat (msec) : 4=0.23%, 10=1.71%, 20=4.70%, 50=23.42%, 100=19.22% 00:22:22.581 lat (msec) : 250=38.10%, 500=12.09%, 750=0.54% 00:22:22.581 cpu : usr=0.29%, sys=1.53%, ctx=1264, majf=0, minf=4097 00:22:22.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:22.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:22.581 issued rwts: total=5213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.581 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:22:22.581 job1: (groupid=0, jobs=1): err= 0: pid=2051207: Sun Jul 14 03:10:15 2024 00:22:22.581 read: IOPS=720, BW=180MiB/s (189MB/s)(1807MiB/10030msec) 00:22:22.581 slat (usec): min=9, max=158646, avg=1145.47, stdev=5034.95 00:22:22.581 clat (msec): min=2, max=520, avg=87.62, stdev=71.27 00:22:22.581 lat (msec): min=2, max=580, avg=88.76, stdev=72.00 00:22:22.581 clat percentiles (msec): 00:22:22.581 | 1.00th=[ 6], 5.00th=[ 24], 10.00th=[ 33], 20.00th=[ 39], 00:22:22.581 | 30.00th=[ 50], 40.00th=[ 61], 50.00th=[ 70], 60.00th=[ 82], 00:22:22.581 | 70.00th=[ 94], 80.00th=[ 118], 90.00th=[ 169], 95.00th=[ 205], 00:22:22.581 | 99.00th=[ 422], 99.50th=[ 460], 99.90th=[ 514], 99.95th=[ 523], 00:22:22.581 | 99.99th=[ 523] 00:22:22.581 bw ( KiB/s): min=33280, max=467456, per=12.57%, avg=183390.35, stdev=99509.26, samples=20 00:22:22.581 iops : min= 130, max= 1826, avg=716.35, stdev=388.71, samples=20 00:22:22.581 lat (msec) : 4=0.37%, 10=1.85%, 20=2.08%, 50=25.85%, 100=44.12% 00:22:22.581 lat (msec) : 250=22.14%, 500=3.34%, 750=0.25% 00:22:22.581 cpu : usr=0.47%, sys=2.46%, ctx=1606, majf=0, minf=4097 00:22:22.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:22.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:22.581 issued rwts: total=7226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.581 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:22.581 job2: (groupid=0, jobs=1): err= 0: pid=2051208: Sun Jul 14 03:10:15 2024 00:22:22.581 read: IOPS=468, BW=117MiB/s (123MB/s)(1193MiB/10194msec) 00:22:22.581 slat (usec): min=12, max=89069, avg=1817.03, stdev=6596.61 00:22:22.581 clat (usec): min=1641, max=425671, avg=134734.89, stdev=80766.81 00:22:22.581 lat (usec): min=1707, max=445358, avg=136551.92, stdev=82042.32 00:22:22.581 clat percentiles (msec): 00:22:22.581 | 1.00th=[ 5], 
5.00th=[ 16], 10.00th=[ 23], 20.00th=[ 62], 00:22:22.581 | 30.00th=[ 78], 40.00th=[ 104], 50.00th=[ 129], 60.00th=[ 159], 00:22:22.581 | 70.00th=[ 186], 80.00th=[ 203], 90.00th=[ 226], 95.00th=[ 275], 00:22:22.581 | 99.00th=[ 363], 99.50th=[ 401], 99.90th=[ 422], 99.95th=[ 426], 00:22:22.581 | 99.99th=[ 426] 00:22:22.581 bw ( KiB/s): min=54784, max=263680, per=8.26%, avg=120538.95, stdev=52733.96, samples=20 00:22:22.581 iops : min= 214, max= 1030, avg=470.85, stdev=205.99, samples=20 00:22:22.581 lat (msec) : 2=0.02%, 4=0.67%, 10=2.37%, 20=5.20%, 50=5.39% 00:22:22.581 lat (msec) : 100=24.94%, 250=54.23%, 500=7.19% 00:22:22.581 cpu : usr=0.35%, sys=1.63%, ctx=1120, majf=0, minf=4097 00:22:22.581 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:22.581 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.581 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:22.581 issued rwts: total=4772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.581 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:22.581 job3: (groupid=0, jobs=1): err= 0: pid=2051209: Sun Jul 14 03:10:15 2024 00:22:22.581 read: IOPS=363, BW=90.8MiB/s (95.2MB/s)(926MiB/10194msec) 00:22:22.581 slat (usec): min=11, max=342978, avg=2489.39, stdev=10540.37 00:22:22.581 clat (msec): min=28, max=518, avg=173.47, stdev=81.32 00:22:22.581 lat (msec): min=41, max=518, avg=175.96, stdev=82.07 00:22:22.581 clat percentiles (msec): 00:22:22.581 | 1.00th=[ 52], 5.00th=[ 77], 10.00th=[ 86], 20.00th=[ 102], 00:22:22.581 | 30.00th=[ 114], 40.00th=[ 140], 50.00th=[ 163], 60.00th=[ 190], 00:22:22.581 | 70.00th=[ 209], 80.00th=[ 226], 90.00th=[ 279], 95.00th=[ 321], 00:22:22.581 | 99.00th=[ 464], 99.50th=[ 485], 99.90th=[ 493], 99.95th=[ 498], 00:22:22.581 | 99.99th=[ 518] 00:22:22.581 bw ( KiB/s): min=52736, max=164352, per=6.39%, avg=93214.40, stdev=31970.27, samples=20 00:22:22.581 iops : min= 206, max= 642, avg=364.10, 
stdev=124.85, samples=20 00:22:22.582 lat (msec) : 50=0.92%, 100=17.49%, 250=66.82%, 500=14.74%, 750=0.03% 00:22:22.582 cpu : usr=0.26%, sys=1.28%, ctx=793, majf=0, minf=4097 00:22:22.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:22:22.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:22.582 issued rwts: total=3704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.582 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:22.582 job4: (groupid=0, jobs=1): err= 0: pid=2051210: Sun Jul 14 03:10:15 2024 00:22:22.582 read: IOPS=501, BW=125MiB/s (131MB/s)(1274MiB/10160msec) 00:22:22.582 slat (usec): min=11, max=253628, avg=1627.36, stdev=7406.22 00:22:22.582 clat (usec): min=1105, max=601786, avg=125852.15, stdev=94252.80 00:22:22.582 lat (usec): min=1125, max=601826, avg=127479.51, stdev=95474.59 00:22:22.582 clat percentiles (msec): 00:22:22.582 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 18], 20.00th=[ 43], 00:22:22.582 | 30.00th=[ 68], 40.00th=[ 90], 50.00th=[ 108], 60.00th=[ 127], 00:22:22.582 | 70.00th=[ 165], 80.00th=[ 207], 90.00th=[ 239], 95.00th=[ 305], 00:22:22.582 | 99.00th=[ 468], 99.50th=[ 485], 99.90th=[ 506], 99.95th=[ 550], 00:22:22.582 | 99.99th=[ 600] 00:22:22.582 bw ( KiB/s): min=37888, max=287232, per=8.83%, avg=128835.80, stdev=58833.92, samples=20 00:22:22.582 iops : min= 148, max= 1122, avg=503.25, stdev=229.83, samples=20 00:22:22.582 lat (msec) : 2=0.57%, 4=1.04%, 10=6.14%, 20=2.88%, 50=12.38% 00:22:22.582 lat (msec) : 100=23.18%, 250=45.33%, 500=8.30%, 750=0.18% 00:22:22.582 cpu : usr=0.30%, sys=1.67%, ctx=1261, majf=0, minf=4097 00:22:22.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:22.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:22:22.582 issued rwts: total=5096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.582 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:22.582 job5: (groupid=0, jobs=1): err= 0: pid=2051211: Sun Jul 14 03:10:15 2024 00:22:22.582 read: IOPS=509, BW=127MiB/s (133MB/s)(1294MiB/10167msec) 00:22:22.582 slat (usec): min=9, max=271841, avg=1637.04, stdev=7592.25 00:22:22.582 clat (usec): min=1962, max=626927, avg=123938.43, stdev=91282.89 00:22:22.582 lat (usec): min=1984, max=634097, avg=125575.47, stdev=92411.09 00:22:22.582 clat percentiles (msec): 00:22:22.582 | 1.00th=[ 5], 5.00th=[ 13], 10.00th=[ 25], 20.00th=[ 47], 00:22:22.582 | 30.00th=[ 65], 40.00th=[ 92], 50.00th=[ 110], 60.00th=[ 129], 00:22:22.582 | 70.00th=[ 150], 80.00th=[ 205], 90.00th=[ 236], 95.00th=[ 264], 00:22:22.582 | 99.00th=[ 472], 99.50th=[ 617], 99.90th=[ 625], 99.95th=[ 625], 00:22:22.582 | 99.99th=[ 625] 00:22:22.582 bw ( KiB/s): min=62464, max=271872, per=8.97%, avg=130918.40, stdev=66346.38, samples=20 00:22:22.582 iops : min= 244, max= 1062, avg=511.40, stdev=259.17, samples=20 00:22:22.582 lat (msec) : 2=0.02%, 4=0.77%, 10=2.59%, 20=4.75%, 50=14.89% 00:22:22.582 lat (msec) : 100=21.96%, 250=48.23%, 500=5.79%, 750=0.99% 00:22:22.582 cpu : usr=0.31%, sys=1.55%, ctx=1262, majf=0, minf=4097 00:22:22.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:22.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:22.582 issued rwts: total=5177,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.582 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:22.582 job6: (groupid=0, jobs=1): err= 0: pid=2051212: Sun Jul 14 03:10:15 2024 00:22:22.582 read: IOPS=466, BW=117MiB/s (122MB/s)(1184MiB/10158msec) 00:22:22.582 slat (usec): min=9, max=276547, avg=1836.21, stdev=7972.09 00:22:22.582 clat (msec): min=8, max=468, avg=135.29, stdev=77.32 
00:22:22.582 lat (msec): min=8, max=468, avg=137.13, stdev=78.12 00:22:22.582 clat percentiles (msec): 00:22:22.582 | 1.00th=[ 33], 5.00th=[ 52], 10.00th=[ 59], 20.00th=[ 75], 00:22:22.582 | 30.00th=[ 90], 40.00th=[ 104], 50.00th=[ 115], 60.00th=[ 133], 00:22:22.582 | 70.00th=[ 157], 80.00th=[ 182], 90.00th=[ 234], 95.00th=[ 284], 00:22:22.582 | 99.00th=[ 439], 99.50th=[ 447], 99.90th=[ 456], 99.95th=[ 464], 00:22:22.582 | 99.99th=[ 468] 00:22:22.582 bw ( KiB/s): min=42496, max=198656, per=8.20%, avg=119619.15, stdev=45485.89, samples=20 00:22:22.582 iops : min= 166, max= 776, avg=467.25, stdev=177.69, samples=20 00:22:22.582 lat (msec) : 10=0.04%, 20=0.17%, 50=3.74%, 100=33.82%, 250=54.32% 00:22:22.582 lat (msec) : 500=7.92% 00:22:22.582 cpu : usr=0.32%, sys=1.45%, ctx=990, majf=0, minf=4097 00:22:22.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:22.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:22.582 issued rwts: total=4737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.582 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:22.582 job7: (groupid=0, jobs=1): err= 0: pid=2051213: Sun Jul 14 03:10:15 2024 00:22:22.582 read: IOPS=578, BW=145MiB/s (152MB/s)(1451MiB/10032msec) 00:22:22.582 slat (usec): min=9, max=214894, avg=1235.86, stdev=6516.83 00:22:22.582 clat (usec): min=1044, max=428832, avg=109329.83, stdev=86158.31 00:22:22.582 lat (usec): min=1063, max=521189, avg=110565.69, stdev=87180.96 00:22:22.582 clat percentiles (msec): 00:22:22.582 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 13], 20.00th=[ 30], 00:22:22.582 | 30.00th=[ 44], 40.00th=[ 59], 50.00th=[ 92], 60.00th=[ 126], 00:22:22.582 | 70.00th=[ 155], 80.00th=[ 186], 90.00th=[ 224], 95.00th=[ 262], 00:22:22.582 | 99.00th=[ 355], 99.50th=[ 401], 99.90th=[ 414], 99.95th=[ 426], 00:22:22.582 | 99.99th=[ 430] 00:22:22.582 bw ( KiB/s): min=48640, 
max=354304, per=10.07%, avg=146955.65, stdev=77141.16, samples=20 00:22:22.582 iops : min= 190, max= 1384, avg=574.00, stdev=301.35, samples=20 00:22:22.582 lat (msec) : 2=0.26%, 4=1.60%, 10=6.34%, 20=6.93%, 50=21.33% 00:22:22.582 lat (msec) : 100=15.27%, 250=41.93%, 500=6.34% 00:22:22.582 cpu : usr=0.30%, sys=1.63%, ctx=1459, majf=0, minf=4097 00:22:22.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:22:22.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:22.582 issued rwts: total=5803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.582 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:22.582 job8: (groupid=0, jobs=1): err= 0: pid=2051214: Sun Jul 14 03:10:15 2024 00:22:22.582 read: IOPS=426, BW=107MiB/s (112MB/s)(1083MiB/10158msec) 00:22:22.582 slat (usec): min=9, max=382718, avg=1454.87, stdev=9342.97 00:22:22.582 clat (usec): min=1613, max=534908, avg=148481.49, stdev=97246.88 00:22:22.582 lat (usec): min=1645, max=569907, avg=149936.37, stdev=98205.35 00:22:22.582 clat percentiles (msec): 00:22:22.582 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 33], 20.00th=[ 71], 00:22:22.582 | 30.00th=[ 94], 40.00th=[ 112], 50.00th=[ 128], 60.00th=[ 161], 00:22:22.582 | 70.00th=[ 194], 80.00th=[ 224], 90.00th=[ 271], 95.00th=[ 305], 00:22:22.582 | 99.00th=[ 468], 99.50th=[ 489], 99.90th=[ 510], 99.95th=[ 531], 00:22:22.582 | 99.99th=[ 535] 00:22:22.582 bw ( KiB/s): min=64000, max=183296, per=7.49%, avg=109291.10, stdev=38355.49, samples=20 00:22:22.582 iops : min= 250, max= 716, avg=426.90, stdev=149.83, samples=20 00:22:22.582 lat (msec) : 2=0.07%, 4=0.65%, 10=4.16%, 20=3.02%, 50=9.51% 00:22:22.582 lat (msec) : 100=15.24%, 250=53.99%, 500=13.23%, 750=0.14% 00:22:22.582 cpu : usr=0.23%, sys=1.06%, ctx=1178, majf=0, minf=4097 00:22:22.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 
00:22:22.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:22.582 issued rwts: total=4332,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.582 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:22.582 job9: (groupid=0, jobs=1): err= 0: pid=2051223: Sun Jul 14 03:10:15 2024 00:22:22.582 read: IOPS=538, BW=135MiB/s (141MB/s)(1351MiB/10042msec) 00:22:22.582 slat (usec): min=9, max=210049, avg=1135.03, stdev=7778.97 00:22:22.582 clat (usec): min=1014, max=591482, avg=117679.82, stdev=92959.08 00:22:22.582 lat (usec): min=1039, max=616999, avg=118814.86, stdev=93807.99 00:22:22.582 clat percentiles (msec): 00:22:22.582 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 22], 00:22:22.582 | 30.00th=[ 55], 40.00th=[ 77], 50.00th=[ 104], 60.00th=[ 136], 00:22:22.582 | 70.00th=[ 165], 80.00th=[ 188], 90.00th=[ 251], 95.00th=[ 284], 00:22:22.582 | 99.00th=[ 409], 99.50th=[ 418], 99.90th=[ 542], 99.95th=[ 542], 00:22:22.582 | 99.99th=[ 592] 00:22:22.582 bw ( KiB/s): min=71168, max=337408, per=9.37%, avg=136742.30, stdev=62192.75, samples=20 00:22:22.582 iops : min= 278, max= 1318, avg=534.10, stdev=242.95, samples=20 00:22:22.582 lat (msec) : 2=0.13%, 4=1.26%, 10=7.47%, 20=10.31%, 50=9.31% 00:22:22.582 lat (msec) : 100=20.28%, 250=40.78%, 500=10.05%, 750=0.43% 00:22:22.582 cpu : usr=0.21%, sys=1.38%, ctx=1361, majf=0, minf=3721 00:22:22.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:22.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:22.582 issued rwts: total=5405,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.582 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:22.582 job10: (groupid=0, jobs=1): err= 0: pid=2051224: Sun Jul 14 03:10:15 2024 00:22:22.582 read: IOPS=653, BW=163MiB/s 
(171MB/s)(1662MiB/10166msec) 00:22:22.582 slat (usec): min=9, max=193980, avg=933.10, stdev=6362.10 00:22:22.582 clat (usec): min=978, max=532792, avg=96865.78, stdev=96616.47 00:22:22.582 lat (usec): min=994, max=532836, avg=97798.87, stdev=97503.10 00:22:22.582 clat percentiles (usec): 00:22:22.582 | 1.00th=[ 1975], 5.00th=[ 4424], 10.00th=[ 9765], 20.00th=[ 21365], 00:22:22.582 | 30.00th=[ 28181], 40.00th=[ 36963], 50.00th=[ 45351], 60.00th=[ 87557], 00:22:22.582 | 70.00th=[137364], 80.00th=[187696], 90.00th=[235930], 95.00th=[295699], 00:22:22.582 | 99.00th=[408945], 99.50th=[429917], 99.90th=[526386], 99.95th=[526386], 00:22:22.582 | 99.99th=[534774] 00:22:22.582 bw ( KiB/s): min=50176, max=349184, per=11.55%, avg=168550.40, stdev=91358.51, samples=20 00:22:22.582 iops : min= 196, max= 1364, avg=658.40, stdev=356.87, samples=20 00:22:22.582 lat (usec) : 1000=0.03% 00:22:22.582 lat (msec) : 2=1.01%, 4=3.41%, 10=5.79%, 20=8.39%, 50=33.50% 00:22:22.582 lat (msec) : 100=10.98%, 250=28.73%, 500=8.02%, 750=0.14% 00:22:22.582 cpu : usr=0.41%, sys=1.70%, ctx=1751, majf=0, minf=4097 00:22:22.582 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:22:22.582 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:22.582 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:22.582 issued rwts: total=6648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:22.582 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:22.582 00:22:22.582 Run status group 0 (all jobs): 00:22:22.583 READ: bw=1425MiB/s (1494MB/s), 90.8MiB/s-180MiB/s (95.2MB/s-189MB/s), io=14.2GiB (15.2GB), run=10030-10194msec 00:22:22.583 00:22:22.583 Disk stats (read/write): 00:22:22.583 nvme0n1: ios=10174/0, merge=0/0, ticks=1235410/0, in_queue=1235410, util=97.26% 00:22:22.583 nvme10n1: ios=14174/0, merge=0/0, ticks=1238836/0, in_queue=1238836, util=97.45% 00:22:22.583 nvme1n1: ios=9543/0, merge=0/0, ticks=1262169/0, in_queue=1262169, 
util=97.79% 00:22:22.583 nvme2n1: ios=7405/0, merge=0/0, ticks=1257985/0, in_queue=1257985, util=97.92% 00:22:22.583 nvme3n1: ios=10023/0, merge=0/0, ticks=1225221/0, in_queue=1225221, util=97.94% 00:22:22.583 nvme4n1: ios=10086/0, merge=0/0, ticks=1233245/0, in_queue=1233245, util=98.26% 00:22:22.583 nvme5n1: ios=9346/0, merge=0/0, ticks=1236208/0, in_queue=1236208, util=98.43% 00:22:22.583 nvme6n1: ios=11373/0, merge=0/0, ticks=1239907/0, in_queue=1239907, util=98.50% 00:22:22.583 nvme7n1: ios=8488/0, merge=0/0, ticks=1235086/0, in_queue=1235086, util=98.92% 00:22:22.583 nvme8n1: ios=10622/0, merge=0/0, ticks=1238283/0, in_queue=1238283, util=99.12% 00:22:22.583 nvme9n1: ios=13118/0, merge=0/0, ticks=1234616/0, in_queue=1234616, util=99.23% 00:22:22.583 03:10:15 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:22.583 [global] 00:22:22.583 thread=1 00:22:22.583 invalidate=1 00:22:22.583 rw=randwrite 00:22:22.583 time_based=1 00:22:22.583 runtime=10 00:22:22.583 ioengine=libaio 00:22:22.583 direct=1 00:22:22.583 bs=262144 00:22:22.583 iodepth=64 00:22:22.583 norandommap=1 00:22:22.583 numjobs=1 00:22:22.583 00:22:22.583 [job0] 00:22:22.583 filename=/dev/nvme0n1 00:22:22.583 [job1] 00:22:22.583 filename=/dev/nvme10n1 00:22:22.583 [job2] 00:22:22.583 filename=/dev/nvme1n1 00:22:22.583 [job3] 00:22:22.583 filename=/dev/nvme2n1 00:22:22.583 [job4] 00:22:22.583 filename=/dev/nvme3n1 00:22:22.583 [job5] 00:22:22.583 filename=/dev/nvme4n1 00:22:22.583 [job6] 00:22:22.583 filename=/dev/nvme5n1 00:22:22.583 [job7] 00:22:22.583 filename=/dev/nvme6n1 00:22:22.583 [job8] 00:22:22.583 filename=/dev/nvme7n1 00:22:22.583 [job9] 00:22:22.583 filename=/dev/nvme8n1 00:22:22.583 [job10] 00:22:22.583 filename=/dev/nvme9n1 00:22:22.583 Could not set queue depth (nvme0n1) 00:22:22.583 Could not set queue depth (nvme10n1) 00:22:22.583 Could not set queue depth (nvme1n1) 00:22:22.583 
Could not set queue depth (nvme2n1) 00:22:22.583 Could not set queue depth (nvme3n1) 00:22:22.583 Could not set queue depth (nvme4n1) 00:22:22.583 Could not set queue depth (nvme5n1) 00:22:22.583 Could not set queue depth (nvme6n1) 00:22:22.583 Could not set queue depth (nvme7n1) 00:22:22.583 Could not set queue depth (nvme8n1) 00:22:22.583 Could not set queue depth (nvme9n1) 00:22:22.583 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.583 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.583 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.583 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.583 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.583 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.583 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.583 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.583 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.583 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.583 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:22.583 fio-3.35 00:22:22.583 Starting 11 threads 00:22:32.562 00:22:32.562 job0: (groupid=0, jobs=1): err= 0: pid=2052265: Sun Jul 14 03:10:26 2024 00:22:32.562 write: IOPS=453, BW=113MiB/s 
(119MB/s)(1144MiB/10085msec); 0 zone resets 00:22:32.562 slat (usec): min=23, max=88609, avg=2120.31, stdev=4626.84 00:22:32.562 clat (msec): min=17, max=348, avg=138.83, stdev=61.13 00:22:32.562 lat (msec): min=17, max=348, avg=140.95, stdev=61.92 00:22:32.562 clat percentiles (msec): 00:22:32.562 | 1.00th=[ 54], 5.00th=[ 63], 10.00th=[ 72], 20.00th=[ 83], 00:22:32.562 | 30.00th=[ 89], 40.00th=[ 107], 50.00th=[ 132], 60.00th=[ 159], 00:22:32.562 | 70.00th=[ 174], 80.00th=[ 188], 90.00th=[ 220], 95.00th=[ 251], 00:22:32.562 | 99.00th=[ 321], 99.50th=[ 342], 99.90th=[ 347], 99.95th=[ 351], 00:22:32.562 | 99.99th=[ 351] 00:22:32.562 bw ( KiB/s): min=61440, max=210944, per=9.43%, avg=115540.00, stdev=46757.11, samples=20 00:22:32.562 iops : min= 240, max= 824, avg=451.30, stdev=182.66, samples=20 00:22:32.562 lat (msec) : 20=0.04%, 50=0.68%, 100=37.08%, 250=57.02%, 500=5.18% 00:22:32.562 cpu : usr=1.37%, sys=1.32%, ctx=1333, majf=0, minf=1 00:22:32.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:22:32.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.562 issued rwts: total=0,4577,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.562 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.562 job1: (groupid=0, jobs=1): err= 0: pid=2052277: Sun Jul 14 03:10:26 2024 00:22:32.562 write: IOPS=422, BW=106MiB/s (111MB/s)(1077MiB/10188msec); 0 zone resets 00:22:32.562 slat (usec): min=22, max=132922, avg=1679.66, stdev=5057.44 00:22:32.562 clat (msec): min=2, max=415, avg=149.61, stdev=72.10 00:22:32.562 lat (msec): min=2, max=415, avg=151.29, stdev=72.99 00:22:32.562 clat percentiles (msec): 00:22:32.562 | 1.00th=[ 9], 5.00th=[ 23], 10.00th=[ 43], 20.00th=[ 82], 00:22:32.562 | 30.00th=[ 109], 40.00th=[ 136], 50.00th=[ 159], 60.00th=[ 174], 00:22:32.562 | 70.00th=[ 190], 80.00th=[ 215], 90.00th=[ 241], 95.00th=[ 255], 
00:22:32.562 | 99.00th=[ 300], 99.50th=[ 338], 99.90th=[ 393], 99.95th=[ 393], 00:22:32.562 | 99.99th=[ 418] 00:22:32.562 bw ( KiB/s): min=63488, max=209408, per=8.87%, avg=108601.70, stdev=36358.60, samples=20 00:22:32.562 iops : min= 248, max= 818, avg=424.20, stdev=142.04, samples=20 00:22:32.562 lat (msec) : 4=0.12%, 10=1.44%, 20=2.62%, 50=7.04%, 100=15.44% 00:22:32.562 lat (msec) : 250=66.26%, 500=7.08% 00:22:32.562 cpu : usr=1.39%, sys=1.40%, ctx=2394, majf=0, minf=1 00:22:32.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:22:32.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.562 issued rwts: total=0,4306,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.562 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.562 job2: (groupid=0, jobs=1): err= 0: pid=2052278: Sun Jul 14 03:10:26 2024 00:22:32.562 write: IOPS=411, BW=103MiB/s (108MB/s)(1038MiB/10095msec); 0 zone resets 00:22:32.562 slat (usec): min=25, max=280232, avg=1817.08, stdev=7355.22 00:22:32.562 clat (msec): min=2, max=691, avg=153.74, stdev=96.41 00:22:32.562 lat (msec): min=3, max=691, avg=155.56, stdev=97.33 00:22:32.562 clat percentiles (msec): 00:22:32.562 | 1.00th=[ 15], 5.00th=[ 34], 10.00th=[ 58], 20.00th=[ 90], 00:22:32.562 | 30.00th=[ 110], 40.00th=[ 125], 50.00th=[ 138], 60.00th=[ 155], 00:22:32.562 | 70.00th=[ 184], 80.00th=[ 213], 90.00th=[ 239], 95.00th=[ 268], 00:22:32.562 | 99.00th=[ 659], 99.50th=[ 676], 99.90th=[ 684], 99.95th=[ 693], 00:22:32.562 | 99.99th=[ 693] 00:22:32.562 bw ( KiB/s): min=16384, max=190976, per=8.54%, avg=104624.80, stdev=41104.42, samples=20 00:22:32.562 iops : min= 64, max= 746, avg=408.65, stdev=160.51, samples=20 00:22:32.562 lat (msec) : 4=0.05%, 10=0.36%, 20=1.64%, 50=6.43%, 100=16.96% 00:22:32.562 lat (msec) : 250=67.12%, 500=6.02%, 750=1.42% 00:22:32.562 cpu : usr=1.49%, sys=1.42%, ctx=2192, 
majf=0, minf=1 00:22:32.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:32.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.562 issued rwts: total=0,4151,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.562 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.562 job3: (groupid=0, jobs=1): err= 0: pid=2052281: Sun Jul 14 03:10:26 2024 00:22:32.562 write: IOPS=392, BW=98.1MiB/s (103MB/s)(1000MiB/10190msec); 0 zone resets 00:22:32.562 slat (usec): min=25, max=131119, avg=2339.19, stdev=6779.76 00:22:32.562 clat (msec): min=4, max=417, avg=160.59, stdev=79.77 00:22:32.562 lat (msec): min=4, max=417, avg=162.93, stdev=80.83 00:22:32.562 clat percentiles (msec): 00:22:32.562 | 1.00th=[ 16], 5.00th=[ 50], 10.00th=[ 60], 20.00th=[ 81], 00:22:32.562 | 30.00th=[ 122], 40.00th=[ 138], 50.00th=[ 155], 60.00th=[ 176], 00:22:32.562 | 70.00th=[ 197], 80.00th=[ 226], 90.00th=[ 271], 95.00th=[ 317], 00:22:32.562 | 99.00th=[ 359], 99.50th=[ 368], 99.90th=[ 393], 99.95th=[ 418], 00:22:32.562 | 99.99th=[ 418] 00:22:32.562 bw ( KiB/s): min=51097, max=233472, per=8.23%, avg=100746.85, stdev=43773.31, samples=20 00:22:32.562 iops : min= 199, max= 912, avg=393.50, stdev=171.03, samples=20 00:22:32.562 lat (msec) : 10=0.48%, 20=1.00%, 50=3.60%, 100=17.85%, 250=62.67% 00:22:32.562 lat (msec) : 500=14.40% 00:22:32.562 cpu : usr=1.07%, sys=1.34%, ctx=1317, majf=0, minf=1 00:22:32.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:32.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.562 issued rwts: total=0,3999,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.562 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.562 job4: (groupid=0, jobs=1): err= 0: pid=2052282: 
Sun Jul 14 03:10:26 2024 00:22:32.562 write: IOPS=418, BW=105MiB/s (110MB/s)(1056MiB/10098msec); 0 zone resets 00:22:32.562 slat (usec): min=21, max=330090, avg=1958.72, stdev=7525.15 00:22:32.562 clat (msec): min=2, max=824, avg=150.99, stdev=110.50 00:22:32.562 lat (msec): min=2, max=824, avg=152.94, stdev=111.87 00:22:32.562 clat percentiles (msec): 00:22:32.562 | 1.00th=[ 15], 5.00th=[ 50], 10.00th=[ 65], 20.00th=[ 74], 00:22:32.562 | 30.00th=[ 90], 40.00th=[ 108], 50.00th=[ 128], 60.00th=[ 140], 00:22:32.562 | 70.00th=[ 165], 80.00th=[ 211], 90.00th=[ 264], 95.00th=[ 326], 00:22:32.562 | 99.00th=[ 751], 99.50th=[ 785], 99.90th=[ 818], 99.95th=[ 827], 00:22:32.562 | 99.99th=[ 827] 00:22:32.562 bw ( KiB/s): min=20480, max=216576, per=8.70%, avg=106503.35, stdev=58675.95, samples=20 00:22:32.562 iops : min= 80, max= 846, avg=415.95, stdev=229.21, samples=20 00:22:32.562 lat (msec) : 4=0.05%, 10=0.43%, 20=1.23%, 50=3.34%, 100=31.49% 00:22:32.562 lat (msec) : 250=50.66%, 500=11.32%, 750=0.45%, 1000=1.04% 00:22:32.562 cpu : usr=1.41%, sys=1.27%, ctx=1810, majf=0, minf=1 00:22:32.562 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:22:32.562 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.562 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.562 issued rwts: total=0,4224,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.562 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.563 job5: (groupid=0, jobs=1): err= 0: pid=2052283: Sun Jul 14 03:10:26 2024 00:22:32.563 write: IOPS=490, BW=123MiB/s (129MB/s)(1250MiB/10188msec); 0 zone resets 00:22:32.563 slat (usec): min=16, max=139622, avg=1503.55, stdev=4802.08 00:22:32.563 clat (msec): min=2, max=571, avg=128.83, stdev=71.83 00:22:32.563 lat (msec): min=2, max=571, avg=130.34, stdev=72.74 00:22:32.563 clat percentiles (msec): 00:22:32.563 | 1.00th=[ 18], 5.00th=[ 39], 10.00th=[ 63], 20.00th=[ 83], 00:22:32.563 | 
30.00th=[ 88], 40.00th=[ 96], 50.00th=[ 112], 60.00th=[ 134], 00:22:32.563 | 70.00th=[ 155], 80.00th=[ 174], 90.00th=[ 205], 95.00th=[ 234], 00:22:32.563 | 99.00th=[ 447], 99.50th=[ 502], 99.90th=[ 535], 99.95th=[ 558], 00:22:32.563 | 99.99th=[ 575] 00:22:32.563 bw ( KiB/s): min=46592, max=214016, per=10.32%, avg=126357.60, stdev=48087.39, samples=20 00:22:32.563 iops : min= 182, max= 836, avg=493.55, stdev=187.81, samples=20 00:22:32.563 lat (msec) : 4=0.06%, 10=0.16%, 20=1.16%, 50=5.80%, 100=35.46% 00:22:32.563 lat (msec) : 250=52.82%, 500=4.02%, 750=0.52% 00:22:32.563 cpu : usr=1.46%, sys=1.52%, ctx=2541, majf=0, minf=1 00:22:32.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:22:32.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.563 issued rwts: total=0,5000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.563 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.563 job6: (groupid=0, jobs=1): err= 0: pid=2052285: Sun Jul 14 03:10:26 2024 00:22:32.563 write: IOPS=338, BW=84.7MiB/s (88.8MB/s)(855MiB/10092msec); 0 zone resets 00:22:32.563 slat (usec): min=18, max=919757, avg=1869.74, stdev=17434.67 00:22:32.563 clat (msec): min=2, max=2035, avg=186.91, stdev=236.97 00:22:32.563 lat (msec): min=2, max=2035, avg=188.78, stdev=238.14 00:22:32.563 clat percentiles (msec): 00:22:32.563 | 1.00th=[ 9], 5.00th=[ 21], 10.00th=[ 40], 20.00th=[ 71], 00:22:32.563 | 30.00th=[ 105], 40.00th=[ 131], 50.00th=[ 142], 60.00th=[ 169], 00:22:32.563 | 70.00th=[ 197], 80.00th=[ 232], 90.00th=[ 288], 95.00th=[ 338], 00:22:32.563 | 99.00th=[ 1334], 99.50th=[ 2022], 99.90th=[ 2039], 99.95th=[ 2039], 00:22:32.563 | 99.99th=[ 2039] 00:22:32.563 bw ( KiB/s): min=15360, max=174080, per=7.39%, avg=90456.47, stdev=37913.73, samples=19 00:22:32.563 iops : min= 60, max= 680, avg=353.32, stdev=148.13, samples=19 00:22:32.563 lat (msec) 
: 4=0.20%, 10=1.40%, 20=3.25%, 50=9.36%, 100=14.50% 00:22:32.563 lat (msec) : 250=53.36%, 500=14.24%, 750=0.56%, 1000=1.29%, 2000=0.96% 00:22:32.563 lat (msec) : >=2000=0.88% 00:22:32.563 cpu : usr=0.97%, sys=1.08%, ctx=2208, majf=0, minf=1 00:22:32.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:22:32.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.563 issued rwts: total=0,3420,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.563 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.563 job7: (groupid=0, jobs=1): err= 0: pid=2052286: Sun Jul 14 03:10:26 2024 00:22:32.563 write: IOPS=548, BW=137MiB/s (144MB/s)(1384MiB/10095msec); 0 zone resets 00:22:32.563 slat (usec): min=20, max=177668, avg=1245.60, stdev=4349.97 00:22:32.563 clat (msec): min=2, max=383, avg=115.34, stdev=57.56 00:22:32.563 lat (msec): min=2, max=383, avg=116.59, stdev=58.07 00:22:32.563 clat percentiles (msec): 00:22:32.563 | 1.00th=[ 9], 5.00th=[ 25], 10.00th=[ 46], 20.00th=[ 61], 00:22:32.563 | 30.00th=[ 79], 40.00th=[ 96], 50.00th=[ 118], 60.00th=[ 132], 00:22:32.563 | 70.00th=[ 142], 80.00th=[ 165], 90.00th=[ 194], 95.00th=[ 213], 00:22:32.563 | 99.00th=[ 251], 99.50th=[ 275], 99.90th=[ 347], 99.95th=[ 368], 00:22:32.563 | 99.99th=[ 384] 00:22:32.563 bw ( KiB/s): min=81920, max=241664, per=11.44%, avg=140106.25, stdev=44475.61, samples=20 00:22:32.563 iops : min= 320, max= 944, avg=547.25, stdev=173.72, samples=20 00:22:32.563 lat (msec) : 4=0.07%, 10=1.30%, 20=2.24%, 50=7.73%, 100=30.31% 00:22:32.563 lat (msec) : 250=57.32%, 500=1.03% 00:22:32.563 cpu : usr=1.50%, sys=1.77%, ctx=2811, majf=0, minf=1 00:22:32.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:22:32.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.563 issued rwts: total=0,5537,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.563 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.563 job8: (groupid=0, jobs=1): err= 0: pid=2052287: Sun Jul 14 03:10:26 2024 00:22:32.563 write: IOPS=278, BW=69.6MiB/s (72.9MB/s)(709MiB/10191msec); 0 zone resets 00:22:32.563 slat (usec): min=22, max=1722.8k, avg=2645.60, stdev=33992.32 00:22:32.563 clat (msec): min=2, max=2262, avg=227.10, stdev=327.82 00:22:32.563 lat (msec): min=2, max=2296, avg=229.75, stdev=330.45 00:22:32.563 clat percentiles (msec): 00:22:32.563 | 1.00th=[ 7], 5.00th=[ 23], 10.00th=[ 39], 20.00th=[ 83], 00:22:32.563 | 30.00th=[ 127], 40.00th=[ 144], 50.00th=[ 161], 60.00th=[ 178], 00:22:32.563 | 70.00th=[ 209], 80.00th=[ 239], 90.00th=[ 326], 95.00th=[ 726], 00:22:32.563 | 99.00th=[ 2165], 99.50th=[ 2232], 99.90th=[ 2265], 99.95th=[ 2265], 00:22:32.563 | 99.99th=[ 2265] 00:22:32.563 bw ( KiB/s): min= 7680, max=131584, per=6.44%, avg=78838.00, stdev=42154.93, samples=18 00:22:32.563 iops : min= 30, max= 514, avg=307.94, stdev=164.66, samples=18 00:22:32.563 lat (msec) : 4=0.11%, 10=1.16%, 20=2.64%, 50=8.57%, 100=10.47% 00:22:32.563 lat (msec) : 250=60.40%, 500=10.30%, 750=1.97%, 1000=2.15%, 2000=0.28% 00:22:32.563 lat (msec) : >=2000=1.94% 00:22:32.563 cpu : usr=0.94%, sys=0.93%, ctx=1647, majf=0, minf=1 00:22:32.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:22:32.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.563 issued rwts: total=0,2836,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.563 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.563 job9: (groupid=0, jobs=1): err= 0: pid=2052288: Sun Jul 14 03:10:26 2024 00:22:32.563 write: IOPS=447, BW=112MiB/s (117MB/s)(1138MiB/10180msec); 0 zone resets 00:22:32.563 slat (usec): min=23, max=87612, 
avg=1191.66, stdev=4328.89 00:22:32.563 clat (msec): min=2, max=473, avg=141.78, stdev=76.81 00:22:32.563 lat (msec): min=2, max=473, avg=142.97, stdev=77.65 00:22:32.563 clat percentiles (msec): 00:22:32.563 | 1.00th=[ 16], 5.00th=[ 32], 10.00th=[ 42], 20.00th=[ 74], 00:22:32.563 | 30.00th=[ 99], 40.00th=[ 117], 50.00th=[ 138], 60.00th=[ 157], 00:22:32.563 | 70.00th=[ 176], 80.00th=[ 207], 90.00th=[ 241], 95.00th=[ 266], 00:22:32.563 | 99.00th=[ 384], 99.50th=[ 435], 99.90th=[ 464], 99.95th=[ 468], 00:22:32.563 | 99.99th=[ 472] 00:22:32.563 bw ( KiB/s): min=64512, max=167424, per=9.38%, avg=114913.95, stdev=31317.50, samples=20 00:22:32.563 iops : min= 252, max= 654, avg=448.80, stdev=122.24, samples=20 00:22:32.563 lat (msec) : 4=0.02%, 10=0.51%, 20=0.92%, 50=11.95%, 100=17.40% 00:22:32.563 lat (msec) : 250=61.17%, 500=8.04% 00:22:32.563 cpu : usr=1.46%, sys=1.35%, ctx=3120, majf=0, minf=1 00:22:32.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:22:32.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.563 issued rwts: total=0,4553,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.563 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.563 job10: (groupid=0, jobs=1): err= 0: pid=2052289: Sun Jul 14 03:10:26 2024 00:22:32.563 write: IOPS=604, BW=151MiB/s (158MB/s)(1539MiB/10188msec); 0 zone resets 00:22:32.563 slat (usec): min=19, max=66842, avg=1084.08, stdev=3170.68 00:22:32.563 clat (usec): min=1857, max=2102.4k, avg=104759.37, stdev=129938.05 00:22:32.563 lat (usec): min=1890, max=2107.6k, avg=105843.46, stdev=130541.57 00:22:32.563 clat percentiles (msec): 00:22:32.563 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 44], 20.00th=[ 56], 00:22:32.563 | 30.00th=[ 60], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 84], 00:22:32.563 | 70.00th=[ 114], 80.00th=[ 157], 90.00th=[ 205], 95.00th=[ 222], 00:22:32.563 | 
99.00th=[ 321], 99.50th=[ 414], 99.90th=[ 2089], 99.95th=[ 2106], 00:22:32.563 | 99.99th=[ 2106] 00:22:32.563 bw ( KiB/s): min=57856, max=284672, per=12.73%, avg=155957.85, stdev=75759.92, samples=20 00:22:32.563 iops : min= 226, max= 1112, avg=609.15, stdev=295.85, samples=20 00:22:32.563 lat (msec) : 2=0.02%, 4=0.11%, 10=0.80%, 20=2.57%, 50=9.44% 00:22:32.563 lat (msec) : 100=55.51%, 250=28.59%, 500=2.65%, 2000=0.03%, >=2000=0.29% 00:22:32.563 cpu : usr=1.73%, sys=1.91%, ctx=3540, majf=0, minf=1 00:22:32.563 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:32.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:32.563 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:32.563 issued rwts: total=0,6157,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:32.563 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:32.563 00:22:32.563 Run status group 0 (all jobs): 00:22:32.563 WRITE: bw=1196MiB/s (1254MB/s), 69.6MiB/s-151MiB/s (72.9MB/s-158MB/s), io=11.9GiB (12.8GB), run=10085-10191msec 00:22:32.563 00:22:32.563 Disk stats (read/write): 00:22:32.563 nvme0n1: ios=51/8915, merge=0/0, ticks=521/1208785, in_queue=1209306, util=99.88% 00:22:32.563 nvme10n1: ios=53/8598, merge=0/0, ticks=885/1244360, in_queue=1245245, util=100.00% 00:22:32.563 nvme1n1: ios=42/8084, merge=0/0, ticks=1167/1219519, in_queue=1220686, util=100.00% 00:22:32.563 nvme2n1: ios=46/7981, merge=0/0, ticks=2336/1186786, in_queue=1189122, util=100.00% 00:22:32.563 nvme3n1: ios=0/8228, merge=0/0, ticks=0/1215338, in_queue=1215338, util=97.80% 00:22:32.563 nvme4n1: ios=0/9987, merge=0/0, ticks=0/1244253, in_queue=1244253, util=98.20% 00:22:32.563 nvme5n1: ios=0/6594, merge=0/0, ticks=0/1227805, in_queue=1227805, util=98.29% 00:22:32.563 nvme6n1: ios=46/10845, merge=0/0, ticks=2401/1192593, in_queue=1194994, util=100.00% 00:22:32.563 nvme7n1: ios=37/5650, merge=0/0, ticks=340/1250381, in_queue=1250721, util=100.00% 
00:22:32.563 nvme8n1: ios=42/9100, merge=0/0, ticks=2101/1254332, in_queue=1256433, util=100.00% 00:22:32.563 nvme9n1: ios=0/12301, merge=0/0, ticks=0/1245510, in_queue=1245510, util=99.13% 00:22:32.563 03:10:26 -- target/multiconnection.sh@36 -- # sync 00:22:32.563 03:10:26 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:32.563 03:10:26 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.563 03:10:26 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:32.563 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:32.563 03:10:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:32.563 03:10:27 -- common/autotest_common.sh@1198 -- # local i=0 00:22:32.563 03:10:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:32.563 03:10:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:22:32.563 03:10:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:32.563 03:10:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:22:32.563 03:10:27 -- common/autotest_common.sh@1210 -- # return 0 00:22:32.563 03:10:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:32.563 03:10:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:32.563 03:10:27 -- common/autotest_common.sh@10 -- # set +x 00:22:32.563 03:10:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:32.564 03:10:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.564 03:10:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:32.564 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:32.564 03:10:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:32.564 03:10:27 -- common/autotest_common.sh@1198 -- # local i=0 00:22:32.564 03:10:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:32.564 
03:10:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:22:32.564 03:10:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:32.564 03:10:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:22:32.564 03:10:27 -- common/autotest_common.sh@1210 -- # return 0 00:22:32.564 03:10:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:32.564 03:10:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:32.564 03:10:27 -- common/autotest_common.sh@10 -- # set +x 00:22:32.564 03:10:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:32.564 03:10:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.564 03:10:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:32.839 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:32.839 03:10:27 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:32.839 03:10:27 -- common/autotest_common.sh@1198 -- # local i=0 00:22:32.839 03:10:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:32.839 03:10:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:22:32.839 03:10:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:32.839 03:10:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:22:32.839 03:10:27 -- common/autotest_common.sh@1210 -- # return 0 00:22:32.839 03:10:27 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:32.839 03:10:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:32.839 03:10:27 -- common/autotest_common.sh@10 -- # set +x 00:22:32.839 03:10:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:32.839 03:10:27 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.839 03:10:27 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:33.114 
NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:33.115 03:10:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:33.115 03:10:28 -- common/autotest_common.sh@1198 -- # local i=0 00:22:33.115 03:10:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:33.115 03:10:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:22:33.115 03:10:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:33.115 03:10:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:22:33.115 03:10:28 -- common/autotest_common.sh@1210 -- # return 0 00:22:33.115 03:10:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:33.115 03:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.115 03:10:28 -- common/autotest_common.sh@10 -- # set +x 00:22:33.115 03:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.115 03:10:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:33.115 03:10:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:33.373 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:33.373 03:10:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:33.373 03:10:28 -- common/autotest_common.sh@1198 -- # local i=0 00:22:33.373 03:10:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:33.373 03:10:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:22:33.373 03:10:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:33.373 03:10:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:22:33.373 03:10:28 -- common/autotest_common.sh@1210 -- # return 0 00:22:33.373 03:10:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:33.373 03:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.373 03:10:28 -- 
common/autotest_common.sh@10 -- # set +x 00:22:33.373 03:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.373 03:10:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:33.373 03:10:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:33.631 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:33.631 03:10:28 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:33.631 03:10:28 -- common/autotest_common.sh@1198 -- # local i=0 00:22:33.631 03:10:28 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:33.631 03:10:28 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:22:33.631 03:10:28 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:33.631 03:10:28 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:22:33.631 03:10:28 -- common/autotest_common.sh@1210 -- # return 0 00:22:33.631 03:10:28 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:33.631 03:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.631 03:10:28 -- common/autotest_common.sh@10 -- # set +x 00:22:33.631 03:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.631 03:10:28 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:33.631 03:10:28 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:33.890 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:33.890 03:10:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:33.890 03:10:29 -- common/autotest_common.sh@1198 -- # local i=0 00:22:33.890 03:10:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:33.890 03:10:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:22:33.890 03:10:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:33.890 03:10:29 -- common/autotest_common.sh@1206 -- 
# grep -q -w SPDK7 00:22:33.890 03:10:29 -- common/autotest_common.sh@1210 -- # return 0 00:22:33.890 03:10:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:33.890 03:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.890 03:10:29 -- common/autotest_common.sh@10 -- # set +x 00:22:33.890 03:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.890 03:10:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:33.890 03:10:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:34.148 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:34.148 03:10:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:34.148 03:10:29 -- common/autotest_common.sh@1198 -- # local i=0 00:22:34.148 03:10:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:34.148 03:10:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:22:34.148 03:10:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:34.148 03:10:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:22:34.148 03:10:29 -- common/autotest_common.sh@1210 -- # return 0 00:22:34.148 03:10:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:34.148 03:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.148 03:10:29 -- common/autotest_common.sh@10 -- # set +x 00:22:34.148 03:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.148 03:10:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:34.148 03:10:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:34.148 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:34.148 03:10:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:34.148 03:10:29 -- common/autotest_common.sh@1198 -- # local i=0 
00:22:34.148 03:10:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:34.148 03:10:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:22:34.148 03:10:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:34.148 03:10:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:22:34.407 03:10:29 -- common/autotest_common.sh@1210 -- # return 0 00:22:34.407 03:10:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:34.407 03:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.407 03:10:29 -- common/autotest_common.sh@10 -- # set +x 00:22:34.407 03:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.407 03:10:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:34.407 03:10:29 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:34.407 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:34.407 03:10:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:34.407 03:10:29 -- common/autotest_common.sh@1198 -- # local i=0 00:22:34.407 03:10:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:34.407 03:10:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:22:34.407 03:10:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:34.407 03:10:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:22:34.407 03:10:29 -- common/autotest_common.sh@1210 -- # return 0 00:22:34.407 03:10:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:34.407 03:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.407 03:10:29 -- common/autotest_common.sh@10 -- # set +x 00:22:34.407 03:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.407 03:10:29 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:34.407 03:10:29 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:34.407 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:34.407 03:10:29 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:34.407 03:10:29 -- common/autotest_common.sh@1198 -- # local i=0 00:22:34.407 03:10:29 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:34.407 03:10:29 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:22:34.407 03:10:29 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:34.407 03:10:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:22:34.407 03:10:29 -- common/autotest_common.sh@1210 -- # return 0 00:22:34.407 03:10:29 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:34.407 03:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:34.407 03:10:29 -- common/autotest_common.sh@10 -- # set +x 00:22:34.407 03:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:34.407 03:10:29 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:34.407 03:10:29 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:34.407 03:10:29 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:34.407 03:10:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:34.407 03:10:29 -- nvmf/common.sh@116 -- # sync 00:22:34.407 03:10:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:34.407 03:10:29 -- nvmf/common.sh@119 -- # set +e 00:22:34.407 03:10:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:34.407 03:10:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:34.407 rmmod nvme_tcp 00:22:34.407 rmmod nvme_fabrics 00:22:34.407 rmmod nvme_keyring 00:22:34.666 03:10:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:34.666 03:10:29 -- nvmf/common.sh@123 -- # set -e 00:22:34.666 03:10:29 -- nvmf/common.sh@124 -- # return 0 00:22:34.666 03:10:29 -- nvmf/common.sh@477 
-- # '[' -n 2046569 ']' 00:22:34.666 03:10:29 -- nvmf/common.sh@478 -- # killprocess 2046569 00:22:34.666 03:10:29 -- common/autotest_common.sh@926 -- # '[' -z 2046569 ']' 00:22:34.666 03:10:29 -- common/autotest_common.sh@930 -- # kill -0 2046569 00:22:34.666 03:10:29 -- common/autotest_common.sh@931 -- # uname 00:22:34.666 03:10:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:34.666 03:10:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2046569 00:22:34.666 03:10:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:34.666 03:10:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:34.666 03:10:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2046569' 00:22:34.666 killing process with pid 2046569 00:22:34.666 03:10:29 -- common/autotest_common.sh@945 -- # kill 2046569 00:22:34.666 03:10:29 -- common/autotest_common.sh@950 -- # wait 2046569 00:22:35.233 03:10:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:35.233 03:10:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:35.233 03:10:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:35.233 03:10:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:35.233 03:10:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:35.233 03:10:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:35.233 03:10:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:35.233 03:10:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.145 03:10:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:37.145 00:22:37.145 real 1m2.057s 00:22:37.145 user 3m25.527s 00:22:37.145 sys 0m22.722s 00:22:37.145 03:10:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:37.145 03:10:32 -- common/autotest_common.sh@10 -- # set +x 00:22:37.145 ************************************ 00:22:37.145 END TEST nvmf_multiconnection 00:22:37.145 
************************************ 00:22:37.145 03:10:32 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:37.145 03:10:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:37.145 03:10:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:37.145 03:10:32 -- common/autotest_common.sh@10 -- # set +x 00:22:37.145 ************************************ 00:22:37.145 START TEST nvmf_initiator_timeout 00:22:37.145 ************************************ 00:22:37.145 03:10:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:37.145 * Looking for test storage... 00:22:37.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:37.145 03:10:32 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:37.145 03:10:32 -- nvmf/common.sh@7 -- # uname -s 00:22:37.145 03:10:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:37.145 03:10:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:37.145 03:10:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:37.145 03:10:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:37.145 03:10:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:37.145 03:10:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:37.145 03:10:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:37.145 03:10:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:37.145 03:10:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:37.145 03:10:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:37.146 03:10:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.146 03:10:32 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:37.146 03:10:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:37.146 03:10:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:37.146 03:10:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:37.146 03:10:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:37.146 03:10:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:37.146 03:10:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:37.146 03:10:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:37.146 03:10:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.146 03:10:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.146 03:10:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.146 03:10:32 -- paths/export.sh@5 -- # export PATH 00:22:37.146 03:10:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:37.146 03:10:32 -- nvmf/common.sh@46 -- # : 0 00:22:37.146 03:10:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:37.146 03:10:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:37.146 03:10:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:37.146 03:10:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:37.146 03:10:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:37.146 03:10:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:37.146 03:10:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:37.146 03:10:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:37.146 03:10:32 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:37.146 03:10:32 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:37.146 03:10:32 -- 
target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:37.146 03:10:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:37.146 03:10:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:37.146 03:10:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:37.146 03:10:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:37.146 03:10:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:37.146 03:10:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.146 03:10:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.146 03:10:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:37.146 03:10:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:37.146 03:10:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:37.146 03:10:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:37.146 03:10:32 -- common/autotest_common.sh@10 -- # set +x 00:22:39.046 03:10:34 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:39.046 03:10:34 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:39.046 03:10:34 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:39.046 03:10:34 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:39.046 03:10:34 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:39.046 03:10:34 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:39.046 03:10:34 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:39.046 03:10:34 -- nvmf/common.sh@294 -- # net_devs=() 00:22:39.046 03:10:34 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:39.046 03:10:34 -- nvmf/common.sh@295 -- # e810=() 00:22:39.046 03:10:34 -- nvmf/common.sh@295 -- # local -ga e810 00:22:39.046 03:10:34 -- nvmf/common.sh@296 -- # x722=() 00:22:39.046 03:10:34 -- nvmf/common.sh@296 -- # local -ga x722 00:22:39.046 03:10:34 -- nvmf/common.sh@297 -- # mlx=() 00:22:39.046 03:10:34 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:39.046 03:10:34 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:22:39.046 03:10:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:39.046 03:10:34 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:39.046 03:10:34 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:39.046 03:10:34 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:39.046 03:10:34 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:39.046 03:10:34 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:39.046 03:10:34 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:39.046 03:10:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:39.046 03:10:34 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:39.046 03:10:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:39.046 03:10:34 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:39.046 03:10:34 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:39.046 03:10:34 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:39.046 03:10:34 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:39.046 03:10:34 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:39.046 03:10:34 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:39.046 03:10:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:39.046 03:10:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:39.046 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:39.046 03:10:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:39.046 03:10:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:39.046 03:10:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.046 03:10:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.046 03:10:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:39.046 03:10:34 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:22:39.046 03:10:34 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:39.046 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:39.046 03:10:34 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:39.046 03:10:34 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:39.046 03:10:34 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:39.046 03:10:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:39.046 03:10:34 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:39.046 03:10:34 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:39.047 03:10:34 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:39.047 03:10:34 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:39.047 03:10:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:39.047 03:10:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.047 03:10:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:39.047 03:10:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.047 03:10:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:39.047 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:39.047 03:10:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.047 03:10:34 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:39.047 03:10:34 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:39.047 03:10:34 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:39.047 03:10:34 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:39.047 03:10:34 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:39.047 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:39.047 03:10:34 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:39.047 03:10:34 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:39.047 03:10:34 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:39.047 03:10:34 -- 
nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:39.047 03:10:34 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:39.047 03:10:34 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:39.047 03:10:34 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:39.047 03:10:34 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:39.047 03:10:34 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:39.047 03:10:34 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:39.047 03:10:34 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:39.047 03:10:34 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:39.047 03:10:34 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:39.047 03:10:34 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:39.047 03:10:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:39.047 03:10:34 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:39.047 03:10:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:39.047 03:10:34 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:39.305 03:10:34 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:39.305 03:10:34 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:39.305 03:10:34 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:39.305 03:10:34 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:39.305 03:10:34 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:39.305 03:10:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:39.305 03:10:34 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:39.305 03:10:34 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:39.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:39.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:22:39.305 00:22:39.305 --- 10.0.0.2 ping statistics --- 00:22:39.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.305 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:22:39.305 03:10:34 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:39.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:39.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:22:39.305 00:22:39.305 --- 10.0.0.1 ping statistics --- 00:22:39.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:39.305 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:22:39.305 03:10:34 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:39.305 03:10:34 -- nvmf/common.sh@410 -- # return 0 00:22:39.305 03:10:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:39.305 03:10:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:39.305 03:10:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:39.305 03:10:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:39.305 03:10:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:39.305 03:10:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:39.305 03:10:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:39.305 03:10:34 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:39.305 03:10:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:39.305 03:10:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:39.305 03:10:34 -- common/autotest_common.sh@10 -- # set +x 00:22:39.305 03:10:34 -- nvmf/common.sh@469 -- # nvmfpid=2055674 00:22:39.305 03:10:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:39.305 03:10:34 -- nvmf/common.sh@470 -- # waitforlisten 2055674 00:22:39.305 03:10:34 -- 
common/autotest_common.sh@819 -- # '[' -z 2055674 ']' 00:22:39.305 03:10:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.305 03:10:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:39.305 03:10:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.305 03:10:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:39.305 03:10:34 -- common/autotest_common.sh@10 -- # set +x 00:22:39.305 [2024-07-14 03:10:34.493825] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:22:39.305 [2024-07-14 03:10:34.493910] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.305 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.564 [2024-07-14 03:10:34.560962] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:39.564 [2024-07-14 03:10:34.648144] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:39.564 [2024-07-14 03:10:34.648298] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.564 [2024-07-14 03:10:34.648315] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:39.564 [2024-07-14 03:10:34.648328] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:39.564 [2024-07-14 03:10:34.648399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:39.564 [2024-07-14 03:10:34.648483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:39.564 [2024-07-14 03:10:34.648485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.564 [2024-07-14 03:10:34.648422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.499 03:10:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:40.499 03:10:35 -- common/autotest_common.sh@852 -- # return 0 00:22:40.499 03:10:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:40.499 03:10:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:40.499 03:10:35 -- common/autotest_common.sh@10 -- # set +x 00:22:40.499 03:10:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.499 03:10:35 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:40.499 03:10:35 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:40.499 03:10:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.499 03:10:35 -- common/autotest_common.sh@10 -- # set +x 00:22:40.499 Malloc0 00:22:40.499 03:10:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.499 03:10:35 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:40.499 03:10:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.499 03:10:35 -- common/autotest_common.sh@10 -- # set +x 00:22:40.499 Delay0 00:22:40.499 03:10:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.499 03:10:35 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:40.499 03:10:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.499 03:10:35 -- 
common/autotest_common.sh@10 -- # set +x 00:22:40.499 [2024-07-14 03:10:35.498453] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.499 03:10:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.499 03:10:35 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:40.499 03:10:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.499 03:10:35 -- common/autotest_common.sh@10 -- # set +x 00:22:40.499 03:10:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.499 03:10:35 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:40.499 03:10:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.499 03:10:35 -- common/autotest_common.sh@10 -- # set +x 00:22:40.499 03:10:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.499 03:10:35 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.499 03:10:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:40.499 03:10:35 -- common/autotest_common.sh@10 -- # set +x 00:22:40.499 [2024-07-14 03:10:35.526702] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.499 03:10:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:40.499 03:10:35 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:41.067 03:10:36 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:41.067 03:10:36 -- common/autotest_common.sh@1177 -- # local i=0 00:22:41.067 03:10:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:41.067 03:10:36 -- 
common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:41.067 03:10:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:42.960 03:10:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:42.960 03:10:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:42.960 03:10:38 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:22:42.960 03:10:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:42.960 03:10:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:42.960 03:10:38 -- common/autotest_common.sh@1187 -- # return 0 00:22:42.960 03:10:38 -- target/initiator_timeout.sh@35 -- # fio_pid=2056122 00:22:42.960 03:10:38 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:42.961 03:10:38 -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:42.961 [global] 00:22:42.961 thread=1 00:22:42.961 invalidate=1 00:22:42.961 rw=write 00:22:42.961 time_based=1 00:22:42.961 runtime=60 00:22:42.961 ioengine=libaio 00:22:42.961 direct=1 00:22:42.961 bs=4096 00:22:42.961 iodepth=1 00:22:42.961 norandommap=0 00:22:42.961 numjobs=1 00:22:42.961 00:22:42.961 verify_dump=1 00:22:42.961 verify_backlog=512 00:22:42.961 verify_state_save=0 00:22:42.961 do_verify=1 00:22:42.961 verify=crc32c-intel 00:22:42.961 [job0] 00:22:42.961 filename=/dev/nvme0n1 00:22:42.961 Could not set queue depth (nvme0n1) 00:22:43.216 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:43.216 fio-3.35 00:22:43.216 Starting 1 thread 00:22:46.490 03:10:41 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:46.490 03:10:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.490 03:10:41 -- common/autotest_common.sh@10 -- # set +x 00:22:46.490 true 00:22:46.490 03:10:41 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:22:46.490 03:10:41 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:46.490 03:10:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.490 03:10:41 -- common/autotest_common.sh@10 -- # set +x 00:22:46.490 true 00:22:46.490 03:10:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.490 03:10:41 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:46.490 03:10:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.490 03:10:41 -- common/autotest_common.sh@10 -- # set +x 00:22:46.490 true 00:22:46.490 03:10:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.490 03:10:41 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:46.490 03:10:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:46.490 03:10:41 -- common/autotest_common.sh@10 -- # set +x 00:22:46.490 true 00:22:46.490 03:10:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:46.490 03:10:41 -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:49.013 03:10:44 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:49.013 03:10:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.013 03:10:44 -- common/autotest_common.sh@10 -- # set +x 00:22:49.013 true 00:22:49.013 03:10:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.013 03:10:44 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:49.013 03:10:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.013 03:10:44 -- common/autotest_common.sh@10 -- # set +x 00:22:49.013 true 00:22:49.013 03:10:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.013 03:10:44 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:49.013 03:10:44 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:22:49.013 03:10:44 -- common/autotest_common.sh@10 -- # set +x 00:22:49.013 true 00:22:49.013 03:10:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.013 03:10:44 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:22:49.013 03:10:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:49.013 03:10:44 -- common/autotest_common.sh@10 -- # set +x 00:22:49.013 true 00:22:49.013 03:10:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:49.013 03:10:44 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:49.013 03:10:44 -- target/initiator_timeout.sh@54 -- # wait 2056122 00:23:45.250 00:23:45.250 job0: (groupid=0, jobs=1): err= 0: pid=2056191: Sun Jul 14 03:11:38 2024 00:23:45.250 read: IOPS=50, BW=203KiB/s (208kB/s)(11.9MiB/60012msec) 00:23:45.250 slat (usec): min=5, max=8792, avg=20.31, stdev=159.38 00:23:45.250 clat (usec): min=376, max=41265k, avg=19310.75, stdev=748319.86 00:23:45.250 lat (usec): min=390, max=41265k, avg=19331.06, stdev=748319.97 00:23:45.250 clat percentiles (usec): 00:23:45.250 | 1.00th=[ 482], 5.00th=[ 506], 10.00th=[ 515], 00:23:45.250 | 20.00th=[ 523], 30.00th=[ 529], 40.00th=[ 537], 00:23:45.250 | 50.00th=[ 545], 60.00th=[ 553], 70.00th=[ 570], 00:23:45.250 | 80.00th=[ 594], 90.00th=[ 41157], 95.00th=[ 41157], 00:23:45.250 | 99.00th=[ 41157], 99.50th=[ 42206], 99.90th=[ 42206], 00:23:45.250 | 99.95th=[ 44827], 99.99th=[17112761] 00:23:45.250 write: IOPS=51, BW=205KiB/s (210kB/s)(12.0MiB/60012msec); 0 zone resets 00:23:45.250 slat (nsec): min=7356, max=78772, avg=26032.23, stdev=13813.54 00:23:45.250 clat (usec): min=238, max=582, avg=361.59, stdev=61.13 00:23:45.250 lat (usec): min=246, max=622, avg=387.63, stdev=70.46 00:23:45.250 clat percentiles (usec): 00:23:45.250 | 1.00th=[ 245], 5.00th=[ 260], 10.00th=[ 277], 20.00th=[ 306], 00:23:45.250 | 30.00th=[ 326], 40.00th=[ 343], 50.00th=[ 359], 60.00th=[ 379], 00:23:45.250 | 70.00th=[ 404], 
80.00th=[ 424], 90.00th=[ 441], 95.00th=[ 453], 00:23:45.250 | 99.00th=[ 486], 99.50th=[ 502], 99.90th=[ 545], 99.95th=[ 562], 00:23:45.250 | 99.99th=[ 586] 00:23:45.250 bw ( KiB/s): min= 3960, max= 4232, per=100.00%, avg=4096.00, stdev=86.01, samples=6 00:23:45.250 iops : min= 990, max= 1058, avg=1024.00, stdev=21.50, samples=6 00:23:45.250 lat (usec) : 250=1.16%, 500=50.45%, 750=41.91%, 1000=0.03% 00:23:45.250 lat (msec) : 2=0.05%, 50=6.38%, >=2000=0.02% 00:23:45.250 cpu : usr=0.17%, sys=0.28%, ctx=6115, majf=0, minf=2 00:23:45.250 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:45.250 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.250 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:45.250 issued rwts: total=3041,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:45.250 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:45.250 00:23:45.250 Run status group 0 (all jobs): 00:23:45.250 READ: bw=203KiB/s (208kB/s), 203KiB/s-203KiB/s (208kB/s-208kB/s), io=11.9MiB (12.5MB), run=60012-60012msec 00:23:45.250 WRITE: bw=205KiB/s (210kB/s), 205KiB/s-205KiB/s (210kB/s-210kB/s), io=12.0MiB (12.6MB), run=60012-60012msec 00:23:45.250 00:23:45.250 Disk stats (read/write): 00:23:45.250 nvme0n1: ios=3137/3072, merge=0/0, ticks=17426/955, in_queue=18381, util=99.80% 00:23:45.250 03:11:38 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:45.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:45.250 03:11:38 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:45.250 03:11:38 -- common/autotest_common.sh@1198 -- # local i=0 00:23:45.250 03:11:38 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:23:45.250 03:11:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:45.250 03:11:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 
00:23:45.250 03:11:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:45.250 03:11:38 -- common/autotest_common.sh@1210 -- # return 0 00:23:45.250 03:11:38 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:45.250 03:11:38 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:45.250 nvmf hotplug test: fio successful as expected 00:23:45.250 03:11:38 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:45.250 03:11:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:45.250 03:11:38 -- common/autotest_common.sh@10 -- # set +x 00:23:45.250 03:11:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:45.250 03:11:38 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:23:45.250 03:11:38 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:45.250 03:11:38 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:45.250 03:11:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:45.250 03:11:38 -- nvmf/common.sh@116 -- # sync 00:23:45.250 03:11:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:45.250 03:11:38 -- nvmf/common.sh@119 -- # set +e 00:23:45.250 03:11:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:45.250 03:11:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:45.250 rmmod nvme_tcp 00:23:45.250 rmmod nvme_fabrics 00:23:45.250 rmmod nvme_keyring 00:23:45.250 03:11:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:45.250 03:11:38 -- nvmf/common.sh@123 -- # set -e 00:23:45.250 03:11:38 -- nvmf/common.sh@124 -- # return 0 00:23:45.250 03:11:38 -- nvmf/common.sh@477 -- # '[' -n 2055674 ']' 00:23:45.250 03:11:38 -- nvmf/common.sh@478 -- # killprocess 2055674 00:23:45.250 03:11:38 -- common/autotest_common.sh@926 -- # '[' -z 2055674 ']' 00:23:45.250 03:11:38 -- common/autotest_common.sh@930 -- # kill -0 2055674 00:23:45.250 03:11:38 -- 
common/autotest_common.sh@931 -- # uname 00:23:45.250 03:11:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:45.251 03:11:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2055674 00:23:45.251 03:11:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:45.251 03:11:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:45.251 03:11:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2055674' 00:23:45.251 killing process with pid 2055674 00:23:45.251 03:11:38 -- common/autotest_common.sh@945 -- # kill 2055674 00:23:45.251 03:11:38 -- common/autotest_common.sh@950 -- # wait 2055674 00:23:45.251 03:11:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:45.251 03:11:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:45.251 03:11:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:45.251 03:11:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:45.251 03:11:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:45.251 03:11:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.251 03:11:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:45.251 03:11:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.816 03:11:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:45.816 00:23:45.816 real 1m8.655s 00:23:45.816 user 4m11.944s 00:23:45.816 sys 0m6.904s 00:23:45.816 03:11:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:45.816 03:11:40 -- common/autotest_common.sh@10 -- # set +x 00:23:45.816 ************************************ 00:23:45.816 END TEST nvmf_initiator_timeout 00:23:45.816 ************************************ 00:23:45.816 03:11:40 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:23:45.816 03:11:40 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:23:45.816 03:11:40 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:23:45.816 03:11:40 -- nvmf/common.sh@284 
-- # xtrace_disable 00:23:45.816 03:11:40 -- common/autotest_common.sh@10 -- # set +x 00:23:47.725 03:11:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:47.725 03:11:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:47.725 03:11:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:47.725 03:11:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:47.725 03:11:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:47.725 03:11:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:47.725 03:11:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:47.725 03:11:42 -- nvmf/common.sh@294 -- # net_devs=() 00:23:47.725 03:11:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:47.725 03:11:42 -- nvmf/common.sh@295 -- # e810=() 00:23:47.725 03:11:42 -- nvmf/common.sh@295 -- # local -ga e810 00:23:47.725 03:11:42 -- nvmf/common.sh@296 -- # x722=() 00:23:47.725 03:11:42 -- nvmf/common.sh@296 -- # local -ga x722 00:23:47.725 03:11:42 -- nvmf/common.sh@297 -- # mlx=() 00:23:47.725 03:11:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:47.725 03:11:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:47.725 03:11:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:47.725 03:11:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:47.725 03:11:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:47.725 03:11:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:47.725 03:11:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:47.725 03:11:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:47.725 03:11:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:47.725 03:11:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:47.725 03:11:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:47.725 
03:11:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:47.725 03:11:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:47.725 03:11:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:47.725 03:11:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:47.725 03:11:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:47.725 03:11:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:47.725 03:11:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:47.725 03:11:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:47.725 03:11:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:47.725 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:47.725 03:11:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:47.725 03:11:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:47.725 03:11:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.725 03:11:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.725 03:11:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:47.725 03:11:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:47.725 03:11:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:47.725 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:47.725 03:11:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:47.725 03:11:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:47.725 03:11:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.725 03:11:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.725 03:11:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:47.725 03:11:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:47.725 03:11:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:47.725 03:11:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:47.725 03:11:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:47.725 03:11:42 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.725 03:11:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:47.725 03:11:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.725 03:11:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:47.725 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:47.725 03:11:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.725 03:11:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:47.725 03:11:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.725 03:11:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:47.725 03:11:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.725 03:11:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:47.725 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:47.725 03:11:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.725 03:11:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:47.725 03:11:42 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:47.725 03:11:42 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:23:47.725 03:11:42 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:47.725 03:11:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:47.725 03:11:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:47.725 03:11:42 -- common/autotest_common.sh@10 -- # set +x 00:23:47.725 ************************************ 00:23:47.725 START TEST nvmf_perf_adq 00:23:47.725 ************************************ 00:23:47.725 03:11:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:47.983 * Looking for test storage... 
00:23:47.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:47.983 03:11:42 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.983 03:11:42 -- nvmf/common.sh@7 -- # uname -s 00:23:47.983 03:11:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.983 03:11:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.983 03:11:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.983 03:11:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.983 03:11:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.983 03:11:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.983 03:11:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.983 03:11:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.983 03:11:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.983 03:11:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.983 03:11:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:47.983 03:11:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:47.983 03:11:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.983 03:11:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.983 03:11:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.983 03:11:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.983 03:11:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.983 03:11:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.983 03:11:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.983 03:11:42 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.983 03:11:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.983 03:11:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.983 03:11:42 -- paths/export.sh@5 -- # export PATH 00:23:47.983 03:11:42 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.983 03:11:42 -- nvmf/common.sh@46 -- # : 0 00:23:47.983 03:11:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:47.983 03:11:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:47.983 03:11:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:47.983 03:11:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.983 03:11:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.983 03:11:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:47.983 03:11:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:47.983 03:11:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:47.984 03:11:42 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:47.984 03:11:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:47.984 03:11:42 -- common/autotest_common.sh@10 -- # set +x 00:23:49.885 03:11:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:49.885 03:11:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:49.885 03:11:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:49.885 03:11:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:49.885 03:11:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:49.885 03:11:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:49.885 03:11:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:49.885 03:11:44 -- nvmf/common.sh@294 -- # net_devs=() 00:23:49.885 03:11:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:49.885 03:11:44 
-- nvmf/common.sh@295 -- # e810=() 00:23:49.885 03:11:44 -- nvmf/common.sh@295 -- # local -ga e810 00:23:49.885 03:11:44 -- nvmf/common.sh@296 -- # x722=() 00:23:49.885 03:11:44 -- nvmf/common.sh@296 -- # local -ga x722 00:23:49.885 03:11:44 -- nvmf/common.sh@297 -- # mlx=() 00:23:49.885 03:11:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:49.885 03:11:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.885 03:11:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.885 03:11:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.885 03:11:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.885 03:11:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.885 03:11:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.885 03:11:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.885 03:11:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.885 03:11:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.885 03:11:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.885 03:11:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.885 03:11:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:49.885 03:11:44 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:49.885 03:11:44 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:49.885 03:11:44 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:49.885 03:11:44 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:49.885 03:11:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:49.886 03:11:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:49.886 03:11:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:49.886 Found 0000:0a:00.0 (0x8086 - 0x159b) 
00:23:49.886 03:11:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:49.886 03:11:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:49.886 03:11:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.886 03:11:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.886 03:11:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:49.886 03:11:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:49.886 03:11:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:49.886 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:49.886 03:11:44 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:49.886 03:11:44 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:49.886 03:11:44 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.886 03:11:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.886 03:11:44 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:49.886 03:11:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:49.886 03:11:44 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:49.886 03:11:44 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:49.886 03:11:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:49.886 03:11:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.886 03:11:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:49.886 03:11:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.886 03:11:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:49.886 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:49.886 03:11:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.886 03:11:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:49.886 03:11:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.886 03:11:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:49.886 03:11:44 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.886 03:11:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:49.886 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:49.886 03:11:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.886 03:11:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:49.886 03:11:44 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:49.886 03:11:44 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:49.886 03:11:44 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:49.886 03:11:44 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:23:49.886 03:11:44 -- target/perf_adq.sh@52 -- # rmmod ice 00:23:50.451 03:11:45 -- target/perf_adq.sh@53 -- # modprobe ice 00:23:52.349 03:11:47 -- target/perf_adq.sh@54 -- # sleep 5 00:23:57.609 03:11:52 -- target/perf_adq.sh@67 -- # nvmftestinit 00:23:57.609 03:11:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:57.609 03:11:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.609 03:11:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:57.609 03:11:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:57.609 03:11:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:57.609 03:11:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.609 03:11:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.609 03:11:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.609 03:11:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:57.609 03:11:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:57.609 03:11:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:57.609 03:11:52 -- common/autotest_common.sh@10 -- # set +x 00:23:57.609 03:11:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:57.609 03:11:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:57.610 
03:11:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:57.610 03:11:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:57.610 03:11:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:57.610 03:11:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:57.610 03:11:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:57.610 03:11:52 -- nvmf/common.sh@294 -- # net_devs=() 00:23:57.610 03:11:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:57.610 03:11:52 -- nvmf/common.sh@295 -- # e810=() 00:23:57.610 03:11:52 -- nvmf/common.sh@295 -- # local -ga e810 00:23:57.610 03:11:52 -- nvmf/common.sh@296 -- # x722=() 00:23:57.610 03:11:52 -- nvmf/common.sh@296 -- # local -ga x722 00:23:57.610 03:11:52 -- nvmf/common.sh@297 -- # mlx=() 00:23:57.610 03:11:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:57.610 03:11:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.610 03:11:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.610 03:11:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.610 03:11:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.610 03:11:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.610 03:11:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.610 03:11:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.610 03:11:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.610 03:11:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.610 03:11:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.610 03:11:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.610 03:11:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:57.610 03:11:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:57.610 03:11:52 -- 
nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:57.610 03:11:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:57.610 03:11:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:57.610 03:11:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:57.610 03:11:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:57.610 03:11:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:57.610 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:57.610 03:11:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:57.610 03:11:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:57.610 03:11:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.610 03:11:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.610 03:11:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:57.610 03:11:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:57.610 03:11:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:57.610 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:57.610 03:11:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:57.610 03:11:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:57.610 03:11:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.610 03:11:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.610 03:11:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:57.610 03:11:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:57.610 03:11:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:57.610 03:11:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:57.610 03:11:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:57.610 03:11:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.610 03:11:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:57.610 03:11:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.610 03:11:52 -- nvmf/common.sh@388 -- # echo 'Found net 
devices under 0000:0a:00.0: cvl_0_0' 00:23:57.610 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:57.610 03:11:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.610 03:11:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:57.610 03:11:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.610 03:11:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:57.610 03:11:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.610 03:11:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:57.610 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:57.610 03:11:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.610 03:11:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:57.610 03:11:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:57.610 03:11:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:57.610 03:11:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:57.610 03:11:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:57.610 03:11:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.610 03:11:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.610 03:11:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.610 03:11:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:57.610 03:11:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.610 03:11:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.610 03:11:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:57.610 03:11:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.610 03:11:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.610 03:11:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:57.610 03:11:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:57.610 03:11:52 -- nvmf/common.sh@247 -- # ip 
netns add cvl_0_0_ns_spdk 00:23:57.610 03:11:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.610 03:11:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.610 03:11:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.610 03:11:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:57.610 03:11:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.610 03:11:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.610 03:11:52 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.610 03:11:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:57.610 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.610 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:23:57.610 00:23:57.610 --- 10.0.0.2 ping statistics --- 00:23:57.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.610 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:23:57.610 03:11:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:57.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:23:57.610 00:23:57.610 --- 10.0.0.1 ping statistics --- 00:23:57.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.610 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:23:57.610 03:11:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.610 03:11:52 -- nvmf/common.sh@410 -- # return 0 00:23:57.610 03:11:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:57.610 03:11:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.610 03:11:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:57.610 03:11:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:57.610 03:11:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.610 03:11:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:57.610 03:11:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:57.610 03:11:52 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:57.610 03:11:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:57.610 03:11:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:57.610 03:11:52 -- common/autotest_common.sh@10 -- # set +x 00:23:57.610 03:11:52 -- nvmf/common.sh@469 -- # nvmfpid=2067982 00:23:57.610 03:11:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:57.610 03:11:52 -- nvmf/common.sh@470 -- # waitforlisten 2067982 00:23:57.610 03:11:52 -- common/autotest_common.sh@819 -- # '[' -z 2067982 ']' 00:23:57.610 03:11:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.610 03:11:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:57.610 03:11:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:57.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.610 03:11:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:57.610 03:11:52 -- common/autotest_common.sh@10 -- # set +x 00:23:57.610 [2024-07-14 03:11:52.588795] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:23:57.610 [2024-07-14 03:11:52.588888] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.610 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.610 [2024-07-14 03:11:52.658213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:57.610 [2024-07-14 03:11:52.748477] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:57.610 [2024-07-14 03:11:52.748630] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.610 [2024-07-14 03:11:52.748646] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.610 [2024-07-14 03:11:52.748659] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:57.610 [2024-07-14 03:11:52.748711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.610 [2024-07-14 03:11:52.748749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.610 [2024-07-14 03:11:52.748808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.610 [2024-07-14 03:11:52.748810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.610 03:11:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:57.610 03:11:52 -- common/autotest_common.sh@852 -- # return 0 00:23:57.610 03:11:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:57.610 03:11:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:57.610 03:11:52 -- common/autotest_common.sh@10 -- # set +x 00:23:57.610 03:11:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.610 03:11:52 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:23:57.610 03:11:52 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:57.610 03:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:57.610 03:11:52 -- common/autotest_common.sh@10 -- # set +x 00:23:57.610 03:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:57.610 03:11:52 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:23:57.610 03:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:57.610 03:11:52 -- common/autotest_common.sh@10 -- # set +x 00:23:57.869 03:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:57.869 03:11:52 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:57.869 03:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:57.869 03:11:52 -- common/autotest_common.sh@10 -- # set +x 00:23:57.869 [2024-07-14 03:11:52.966788] tcp.c: 659:nvmf_tcp_create: *NOTICE*: 
*** TCP Transport Init *** 00:23:57.869 03:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:57.869 03:11:52 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:57.869 03:11:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:57.869 03:11:52 -- common/autotest_common.sh@10 -- # set +x 00:23:57.869 Malloc1 00:23:57.869 03:11:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:57.869 03:11:52 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:57.869 03:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:57.869 03:11:53 -- common/autotest_common.sh@10 -- # set +x 00:23:57.869 03:11:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:57.870 03:11:53 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:57.870 03:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:57.870 03:11:53 -- common/autotest_common.sh@10 -- # set +x 00:23:57.870 03:11:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:57.870 03:11:53 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:57.870 03:11:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:57.870 03:11:53 -- common/autotest_common.sh@10 -- # set +x 00:23:57.870 [2024-07-14 03:11:53.020032] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.870 03:11:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:57.870 03:11:53 -- target/perf_adq.sh@73 -- # perfpid=2068130 00:23:57.870 03:11:53 -- target/perf_adq.sh@74 -- # sleep 2 00:23:57.870 03:11:53 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 
00:23:57.870 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.802 03:11:55 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:23:59.802 03:11:55 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:59.802 03:11:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:59.802 03:11:55 -- target/perf_adq.sh@76 -- # wc -l 00:23:59.802 03:11:55 -- common/autotest_common.sh@10 -- # set +x 00:23:59.802 03:11:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:00.061 03:11:55 -- target/perf_adq.sh@76 -- # count=4 00:24:00.061 03:11:55 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:24:00.061 03:11:55 -- target/perf_adq.sh@81 -- # wait 2068130 00:24:08.167 Initializing NVMe Controllers 00:24:08.167 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:08.167 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:08.167 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:08.167 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:08.167 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:08.167 Initialization complete. Launching workers. 
00:24:08.167 ======================================================== 00:24:08.167 Latency(us) 00:24:08.167 Device Information : IOPS MiB/s Average min max 00:24:08.167 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10698.60 41.79 5983.52 1557.67 10202.38 00:24:08.167 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11375.60 44.44 5626.44 1037.06 8332.98 00:24:08.167 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10680.90 41.72 5993.94 1605.90 9800.33 00:24:08.167 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12079.20 47.18 5298.43 1317.77 8402.89 00:24:08.167 ======================================================== 00:24:08.167 Total : 44834.28 175.13 5710.83 1037.06 10202.38 00:24:08.167 00:24:08.167 03:12:03 -- target/perf_adq.sh@82 -- # nvmftestfini 00:24:08.167 03:12:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:08.167 03:12:03 -- nvmf/common.sh@116 -- # sync 00:24:08.167 03:12:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:08.167 03:12:03 -- nvmf/common.sh@119 -- # set +e 00:24:08.167 03:12:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:08.167 03:12:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:08.167 rmmod nvme_tcp 00:24:08.167 rmmod nvme_fabrics 00:24:08.167 rmmod nvme_keyring 00:24:08.167 03:12:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:08.167 03:12:03 -- nvmf/common.sh@123 -- # set -e 00:24:08.167 03:12:03 -- nvmf/common.sh@124 -- # return 0 00:24:08.167 03:12:03 -- nvmf/common.sh@477 -- # '[' -n 2067982 ']' 00:24:08.167 03:12:03 -- nvmf/common.sh@478 -- # killprocess 2067982 00:24:08.167 03:12:03 -- common/autotest_common.sh@926 -- # '[' -z 2067982 ']' 00:24:08.167 03:12:03 -- common/autotest_common.sh@930 -- # kill -0 2067982 00:24:08.167 03:12:03 -- common/autotest_common.sh@931 -- # uname 00:24:08.167 03:12:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:08.167 03:12:03 -- 
common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2067982 00:24:08.167 03:12:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:08.167 03:12:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:08.167 03:12:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2067982' 00:24:08.167 killing process with pid 2067982 00:24:08.167 03:12:03 -- common/autotest_common.sh@945 -- # kill 2067982 00:24:08.167 03:12:03 -- common/autotest_common.sh@950 -- # wait 2067982 00:24:08.425 03:12:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:08.425 03:12:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:08.425 03:12:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:08.425 03:12:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:08.425 03:12:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:08.425 03:12:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:08.425 03:12:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:08.425 03:12:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.328 03:12:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:10.328 03:12:05 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:24:10.328 03:12:05 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:11.273 03:12:06 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:13.175 03:12:08 -- target/perf_adq.sh@54 -- # sleep 5 00:24:18.444 03:12:13 -- target/perf_adq.sh@87 -- # nvmftestinit 00:24:18.444 03:12:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:18.444 03:12:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.444 03:12:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:18.444 03:12:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:18.444 03:12:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:18.444 03:12:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.444 
03:12:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:18.445 03:12:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.445 03:12:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:18.445 03:12:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:18.445 03:12:13 -- common/autotest_common.sh@10 -- # set +x 00:24:18.445 03:12:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:18.445 03:12:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:18.445 03:12:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:18.445 03:12:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:18.445 03:12:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:18.445 03:12:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:18.445 03:12:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:18.445 03:12:13 -- nvmf/common.sh@294 -- # net_devs=() 00:24:18.445 03:12:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:18.445 03:12:13 -- nvmf/common.sh@295 -- # e810=() 00:24:18.445 03:12:13 -- nvmf/common.sh@295 -- # local -ga e810 00:24:18.445 03:12:13 -- nvmf/common.sh@296 -- # x722=() 00:24:18.445 03:12:13 -- nvmf/common.sh@296 -- # local -ga x722 00:24:18.445 03:12:13 -- nvmf/common.sh@297 -- # mlx=() 00:24:18.445 03:12:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:18.445 03:12:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:18.445 03:12:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:18.445 03:12:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:18.445 03:12:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:18.445 03:12:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:18.445 03:12:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:18.445 03:12:13 -- 
nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:18.445 03:12:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:18.445 03:12:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:18.445 03:12:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:18.445 03:12:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:18.445 03:12:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:18.445 03:12:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:18.445 03:12:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:18.445 03:12:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:18.445 03:12:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:18.445 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:18.445 03:12:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:18.445 03:12:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:18.445 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:18.445 03:12:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@351 -- # 
[[ tcp == rdma ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:18.445 03:12:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:18.445 03:12:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.445 03:12:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:18.445 03:12:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.445 03:12:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:18.445 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:18.445 03:12:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.445 03:12:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:18.445 03:12:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:18.445 03:12:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:18.445 03:12:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:18.445 03:12:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:18.445 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:18.445 03:12:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:18.445 03:12:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:18.445 03:12:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:18.445 03:12:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:18.445 03:12:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:18.445 03:12:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:18.445 03:12:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:18.445 03:12:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:18.445 03:12:13 -- 
nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:18.445 03:12:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:18.445 03:12:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:18.445 03:12:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:18.445 03:12:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:18.445 03:12:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:18.445 03:12:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:18.445 03:12:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:18.445 03:12:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:18.445 03:12:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:18.445 03:12:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:18.445 03:12:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:18.445 03:12:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:18.445 03:12:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:18.445 03:12:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:18.445 03:12:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:18.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:18.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:24:18.445 00:24:18.445 --- 10.0.0.2 ping statistics --- 00:24:18.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.445 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:24:18.445 03:12:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:18.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:18.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:24:18.445 00:24:18.445 --- 10.0.0.1 ping statistics --- 00:24:18.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:18.445 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:24:18.445 03:12:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:18.445 03:12:13 -- nvmf/common.sh@410 -- # return 0 00:24:18.445 03:12:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:18.445 03:12:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:18.445 03:12:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:18.445 03:12:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:18.445 03:12:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:18.445 03:12:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:18.445 03:12:13 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:24:18.445 03:12:13 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:18.445 03:12:13 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:18.445 03:12:13 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:18.445 net.core.busy_poll = 1 00:24:18.445 03:12:13 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:18.445 net.core.busy_read = 1 00:24:18.445 03:12:13 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:18.445 03:12:13 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:18.445 03:12:13 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:18.445 03:12:13 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev 
cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:18.445 03:12:13 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:18.445 03:12:13 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:18.445 03:12:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:18.445 03:12:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:18.445 03:12:13 -- common/autotest_common.sh@10 -- # set +x 00:24:18.445 03:12:13 -- nvmf/common.sh@469 -- # nvmfpid=2071436 00:24:18.445 03:12:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:18.445 03:12:13 -- nvmf/common.sh@470 -- # waitforlisten 2071436 00:24:18.445 03:12:13 -- common/autotest_common.sh@819 -- # '[' -z 2071436 ']' 00:24:18.445 03:12:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:18.445 03:12:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:18.445 03:12:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:18.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:18.445 03:12:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:18.445 03:12:13 -- common/autotest_common.sh@10 -- # set +x 00:24:18.445 [2024-07-14 03:12:13.478822] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:24:18.445 [2024-07-14 03:12:13.478934] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:18.445 EAL: No free 2048 kB hugepages reported on node 1 00:24:18.445 [2024-07-14 03:12:13.544227] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:18.445 [2024-07-14 03:12:13.630280] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:18.445 [2024-07-14 03:12:13.630427] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:18.445 [2024-07-14 03:12:13.630444] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:18.445 [2024-07-14 03:12:13.630456] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:18.445 [2024-07-14 03:12:13.630523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:18.445 [2024-07-14 03:12:13.630585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.445 [2024-07-14 03:12:13.630650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:18.445 [2024-07-14 03:12:13.630653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:18.445 03:12:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:18.445 03:12:13 -- common/autotest_common.sh@852 -- # return 0 00:24:18.445 03:12:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:18.445 03:12:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:18.445 03:12:13 -- common/autotest_common.sh@10 -- # set +x 00:24:18.703 03:12:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.703 03:12:13 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:24:18.703 03:12:13 -- 
target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:18.703 03:12:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.703 03:12:13 -- common/autotest_common.sh@10 -- # set +x 00:24:18.703 03:12:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.703 03:12:13 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:18.703 03:12:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.703 03:12:13 -- common/autotest_common.sh@10 -- # set +x 00:24:18.703 03:12:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.703 03:12:13 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:18.703 03:12:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.703 03:12:13 -- common/autotest_common.sh@10 -- # set +x 00:24:18.703 [2024-07-14 03:12:13.833730] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:18.703 03:12:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.703 03:12:13 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:18.703 03:12:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.703 03:12:13 -- common/autotest_common.sh@10 -- # set +x 00:24:18.703 Malloc1 00:24:18.703 03:12:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.703 03:12:13 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:18.703 03:12:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.703 03:12:13 -- common/autotest_common.sh@10 -- # set +x 00:24:18.703 03:12:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.703 03:12:13 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:18.703 03:12:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.703 03:12:13 -- 
common/autotest_common.sh@10 -- # set +x 00:24:18.703 03:12:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.703 03:12:13 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:18.703 03:12:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:18.703 03:12:13 -- common/autotest_common.sh@10 -- # set +x 00:24:18.703 [2024-07-14 03:12:13.886813] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:18.703 03:12:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:18.703 03:12:13 -- target/perf_adq.sh@94 -- # perfpid=2071463 00:24:18.703 03:12:13 -- target/perf_adq.sh@95 -- # sleep 2 00:24:18.703 03:12:13 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:18.703 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.233 03:12:15 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:24:21.233 03:12:15 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:21.233 03:12:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.233 03:12:15 -- target/perf_adq.sh@97 -- # wc -l 00:24:21.233 03:12:15 -- common/autotest_common.sh@10 -- # set +x 00:24:21.233 03:12:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.233 03:12:15 -- target/perf_adq.sh@97 -- # count=2 00:24:21.233 03:12:15 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:24:21.233 03:12:15 -- target/perf_adq.sh@103 -- # wait 2071463 00:24:29.386 Initializing NVMe Controllers 00:24:29.386 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:29.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:29.386 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:29.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:29.386 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:29.386 Initialization complete. Launching workers. 00:24:29.386 ======================================================== 00:24:29.386 Latency(us) 00:24:29.386 Device Information : IOPS MiB/s Average min max 00:24:29.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6235.10 24.36 10272.55 1925.35 53807.93 00:24:29.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6382.20 24.93 10034.75 1764.88 52834.41 00:24:29.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7960.90 31.10 8039.12 1298.22 52713.20 00:24:29.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8539.60 33.36 7494.69 1457.62 53046.14 00:24:29.387 ======================================================== 00:24:29.387 Total : 29117.79 113.74 8795.12 1298.22 53807.93 00:24:29.387 00:24:29.387 03:12:24 -- target/perf_adq.sh@104 -- # nvmftestfini 00:24:29.387 03:12:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:29.387 03:12:24 -- nvmf/common.sh@116 -- # sync 00:24:29.387 03:12:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:29.387 03:12:24 -- nvmf/common.sh@119 -- # set +e 00:24:29.387 03:12:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:29.387 03:12:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:29.387 rmmod nvme_tcp 00:24:29.387 rmmod nvme_fabrics 00:24:29.387 rmmod nvme_keyring 00:24:29.387 03:12:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:29.387 03:12:24 -- nvmf/common.sh@123 -- # set -e 00:24:29.387 03:12:24 -- nvmf/common.sh@124 -- # return 0 00:24:29.387 03:12:24 -- nvmf/common.sh@477 -- # '[' -n 2071436 ']' 00:24:29.387 03:12:24 -- nvmf/common.sh@478 -- # killprocess 2071436 00:24:29.387 
03:12:24 -- common/autotest_common.sh@926 -- # '[' -z 2071436 ']' 00:24:29.387 03:12:24 -- common/autotest_common.sh@930 -- # kill -0 2071436 00:24:29.387 03:12:24 -- common/autotest_common.sh@931 -- # uname 00:24:29.387 03:12:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:29.387 03:12:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2071436 00:24:29.387 03:12:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:29.387 03:12:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:29.387 03:12:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2071436' 00:24:29.387 killing process with pid 2071436 00:24:29.387 03:12:24 -- common/autotest_common.sh@945 -- # kill 2071436 00:24:29.387 03:12:24 -- common/autotest_common.sh@950 -- # wait 2071436 00:24:29.387 03:12:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:29.387 03:12:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:29.387 03:12:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:29.387 03:12:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:29.387 03:12:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:29.387 03:12:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:29.387 03:12:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:29.387 03:12:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.678 03:12:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:32.678 03:12:27 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:24:32.678 00:24:32.678 real 0m44.508s 00:24:32.678 user 2m29.578s 00:24:32.678 sys 0m13.189s 00:24:32.678 03:12:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:32.678 03:12:27 -- common/autotest_common.sh@10 -- # set +x 00:24:32.678 ************************************ 00:24:32.678 END TEST nvmf_perf_adq 00:24:32.678 ************************************ 
00:24:32.678 03:12:27 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:32.678 03:12:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:32.678 03:12:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:32.678 03:12:27 -- common/autotest_common.sh@10 -- # set +x 00:24:32.678 ************************************ 00:24:32.678 START TEST nvmf_shutdown 00:24:32.678 ************************************ 00:24:32.678 03:12:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:32.678 * Looking for test storage... 00:24:32.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:32.678 03:12:27 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:32.678 03:12:27 -- nvmf/common.sh@7 -- # uname -s 00:24:32.678 03:12:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.678 03:12:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.678 03:12:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.678 03:12:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.678 03:12:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.678 03:12:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.678 03:12:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.678 03:12:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.678 03:12:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.678 03:12:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.678 03:12:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:32.678 03:12:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:32.678 03:12:27 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.678 03:12:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.678 03:12:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.678 03:12:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:32.678 03:12:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.678 03:12:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.678 03:12:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.678 03:12:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.678 03:12:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.678 03:12:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.678 03:12:27 -- paths/export.sh@5 -- # export PATH 00:24:32.678 03:12:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.678 03:12:27 -- nvmf/common.sh@46 -- # : 0 00:24:32.678 03:12:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:32.678 03:12:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:32.678 03:12:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:32.678 03:12:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.678 03:12:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.678 03:12:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:32.678 03:12:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:32.678 03:12:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:32.678 03:12:27 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:32.678 03:12:27 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:32.678 03:12:27 -- target/shutdown.sh@146 
-- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:32.678 03:12:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:32.678 03:12:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:32.678 03:12:27 -- common/autotest_common.sh@10 -- # set +x 00:24:32.678 ************************************ 00:24:32.678 START TEST nvmf_shutdown_tc1 00:24:32.678 ************************************ 00:24:32.678 03:12:27 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:24:32.678 03:12:27 -- target/shutdown.sh@74 -- # starttarget 00:24:32.678 03:12:27 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:32.678 03:12:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:32.678 03:12:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.678 03:12:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:32.678 03:12:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:32.678 03:12:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:32.678 03:12:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.678 03:12:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.678 03:12:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.678 03:12:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:32.678 03:12:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:32.678 03:12:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:32.678 03:12:27 -- common/autotest_common.sh@10 -- # set +x 00:24:34.581 03:12:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:34.581 03:12:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:34.581 03:12:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:34.581 03:12:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:34.581 03:12:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:34.581 03:12:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:34.581 03:12:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 
00:24:34.581 03:12:29 -- nvmf/common.sh@294 -- # net_devs=() 00:24:34.581 03:12:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:34.581 03:12:29 -- nvmf/common.sh@295 -- # e810=() 00:24:34.581 03:12:29 -- nvmf/common.sh@295 -- # local -ga e810 00:24:34.581 03:12:29 -- nvmf/common.sh@296 -- # x722=() 00:24:34.581 03:12:29 -- nvmf/common.sh@296 -- # local -ga x722 00:24:34.581 03:12:29 -- nvmf/common.sh@297 -- # mlx=() 00:24:34.581 03:12:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:34.581 03:12:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.581 03:12:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.581 03:12:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.581 03:12:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.581 03:12:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.581 03:12:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.581 03:12:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.581 03:12:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.581 03:12:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.581 03:12:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.581 03:12:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.581 03:12:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:34.581 03:12:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:34.581 03:12:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:34.581 03:12:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:24:34.581 03:12:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:34.581 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:34.581 03:12:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:34.581 03:12:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:34.581 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:34.581 03:12:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:34.581 03:12:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:34.581 03:12:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.581 03:12:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:34.581 03:12:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.581 03:12:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:34.581 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:34.581 03:12:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.581 03:12:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:34.581 03:12:29 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.581 03:12:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:34.581 03:12:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.581 03:12:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:34.581 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:34.581 03:12:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.581 03:12:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:34.581 03:12:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:34.581 03:12:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:34.581 03:12:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.581 03:12:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.581 03:12:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.581 03:12:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:34.581 03:12:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.581 03:12:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.581 03:12:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:34.581 03:12:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.581 03:12:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.581 03:12:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:34.581 03:12:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:34.581 03:12:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.581 03:12:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.581 03:12:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.581 03:12:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:24:34.581 03:12:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:34.581 03:12:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.581 03:12:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.581 03:12:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.581 03:12:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:34.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:24:34.581 00:24:34.581 --- 10.0.0.2 ping statistics --- 00:24:34.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.581 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:24:34.581 03:12:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:34.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:24:34.581 00:24:34.581 --- 10.0.0.1 ping statistics --- 00:24:34.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.581 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:24:34.581 03:12:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.581 03:12:29 -- nvmf/common.sh@410 -- # return 0 00:24:34.581 03:12:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:34.581 03:12:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.581 03:12:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:34.581 03:12:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.581 03:12:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:34.581 03:12:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:34.581 03:12:29 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:34.581 03:12:29 -- nvmf/common.sh@467 -- # 
timing_enter start_nvmf_tgt 00:24:34.581 03:12:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:34.581 03:12:29 -- common/autotest_common.sh@10 -- # set +x 00:24:34.581 03:12:29 -- nvmf/common.sh@469 -- # nvmfpid=2074807 00:24:34.581 03:12:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:34.581 03:12:29 -- nvmf/common.sh@470 -- # waitforlisten 2074807 00:24:34.581 03:12:29 -- common/autotest_common.sh@819 -- # '[' -z 2074807 ']' 00:24:34.581 03:12:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.581 03:12:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:34.581 03:12:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.581 03:12:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:34.581 03:12:29 -- common/autotest_common.sh@10 -- # set +x 00:24:34.581 [2024-07-14 03:12:29.647690] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:34.581 [2024-07-14 03:12:29.647767] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.581 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.581 [2024-07-14 03:12:29.711338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:34.581 [2024-07-14 03:12:29.799551] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:34.581 [2024-07-14 03:12:29.799697] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:34.581 [2024-07-14 03:12:29.799714] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.581 [2024-07-14 03:12:29.799725] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:34.581 [2024-07-14 03:12:29.799812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:34.581 [2024-07-14 03:12:29.799884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:34.581 [2024-07-14 03:12:29.800006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:34.581 [2024-07-14 03:12:29.800009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.516 03:12:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:35.516 03:12:30 -- common/autotest_common.sh@852 -- # return 0 00:24:35.516 03:12:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:35.516 03:12:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:35.516 03:12:30 -- common/autotest_common.sh@10 -- # set +x 00:24:35.516 03:12:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.516 03:12:30 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:35.516 03:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:35.516 03:12:30 -- common/autotest_common.sh@10 -- # set +x 00:24:35.516 [2024-07-14 03:12:30.603479] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.516 03:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:35.516 03:12:30 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:35.516 03:12:30 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:35.516 03:12:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:35.516 03:12:30 -- common/autotest_common.sh@10 -- # set +x 00:24:35.516 03:12:30 -- target/shutdown.sh@26 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:35.516 03:12:30 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.516 03:12:30 -- target/shutdown.sh@28 -- # cat 00:24:35.516 03:12:30 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.516 03:12:30 -- target/shutdown.sh@28 -- # cat 00:24:35.516 03:12:30 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.516 03:12:30 -- target/shutdown.sh@28 -- # cat 00:24:35.516 03:12:30 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.516 03:12:30 -- target/shutdown.sh@28 -- # cat 00:24:35.516 03:12:30 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.516 03:12:30 -- target/shutdown.sh@28 -- # cat 00:24:35.516 03:12:30 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.516 03:12:30 -- target/shutdown.sh@28 -- # cat 00:24:35.516 03:12:30 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.516 03:12:30 -- target/shutdown.sh@28 -- # cat 00:24:35.516 03:12:30 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.516 03:12:30 -- target/shutdown.sh@28 -- # cat 00:24:35.516 03:12:30 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.516 03:12:30 -- target/shutdown.sh@28 -- # cat 00:24:35.516 03:12:30 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:35.516 03:12:30 -- target/shutdown.sh@28 -- # cat 00:24:35.516 03:12:30 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:35.516 03:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:35.516 03:12:30 -- common/autotest_common.sh@10 -- # set +x 00:24:35.516 Malloc1 00:24:35.516 [2024-07-14 03:12:30.678705] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.516 Malloc2 00:24:35.516 Malloc3 00:24:35.773 Malloc4 00:24:35.773 Malloc5 00:24:35.773 Malloc6 00:24:35.773 Malloc7 00:24:35.773 Malloc8 00:24:36.030 
Malloc9 00:24:36.030 Malloc10 00:24:36.030 03:12:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:36.030 03:12:31 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:36.030 03:12:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:36.030 03:12:31 -- common/autotest_common.sh@10 -- # set +x 00:24:36.030 03:12:31 -- target/shutdown.sh@78 -- # perfpid=2074997 00:24:36.030 03:12:31 -- target/shutdown.sh@79 -- # waitforlisten 2074997 /var/tmp/bdevperf.sock 00:24:36.030 03:12:31 -- common/autotest_common.sh@819 -- # '[' -z 2074997 ']' 00:24:36.030 03:12:31 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:36.030 03:12:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:36.030 03:12:31 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:36.030 03:12:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:36.030 03:12:31 -- nvmf/common.sh@520 -- # config=() 00:24:36.030 03:12:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:36.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
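The `shutdown.sh@27`/`@28` trace above shows the subsystem-setup loop: for each of the ten subsystems it `cat`s an RPC batch into `rpcs.txt` (the batch body itself is not echoed in the log). A minimal sketch of that accumulation pattern, with illustrative RPC payloads since the real ones are elided here (the `bdev_malloc_create` output lines `Malloc1`..`Malloc10` suggest one malloc bdev per subsystem):

```shell
#!/usr/bin/env bash
# Sketch of the rpcs.txt accumulation loop traced above (shutdown.sh@26-28).
# The per-subsystem RPC lines below are illustrative placeholders; the log
# does not show the actual heredoc contents.
num_subsystems=({1..10})
rm -rf rpcs.txt
for i in "${num_subsystems[@]}"; do
  # Append one batch of RPC commands per subsystem (mirrors the `cat` step).
  cat >> rpcs.txt <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a
EOF
done
wc -l < rpcs.txt
```

The batched file is then replayed in one `rpc_cmd` invocation (`shutdown.sh@35`), which is why the `Malloc1`..`Malloc10` creation messages appear together in the log.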
00:24:36.030 03:12:31 -- nvmf/common.sh@520 -- # local subsystem config 00:24:36.030 03:12:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:36.030 03:12:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:36.030 03:12:31 -- common/autotest_common.sh@10 -- # set +x 00:24:36.030 03:12:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:36.030 { 00:24:36.030 "params": { 00:24:36.030 "name": "Nvme$subsystem", 00:24:36.030 "trtype": "$TEST_TRANSPORT", 00:24:36.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.030 "adrfam": "ipv4", 00:24:36.030 "trsvcid": "$NVMF_PORT", 00:24:36.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.030 "hdgst": ${hdgst:-false}, 00:24:36.030 "ddgst": ${ddgst:-false} 00:24:36.030 }, 00:24:36.030 "method": "bdev_nvme_attach_controller" 00:24:36.031 } 00:24:36.031 EOF 00:24:36.031 )") 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- # cat 00:24:36.031 03:12:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:36.031 { 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme$subsystem", 00:24:36.031 "trtype": "$TEST_TRANSPORT", 00:24:36.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.031 "adrfam": "ipv4", 00:24:36.031 "trsvcid": "$NVMF_PORT", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.031 "hdgst": ${hdgst:-false}, 00:24:36.031 "ddgst": ${ddgst:-false} 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 00:24:36.031 } 00:24:36.031 EOF 00:24:36.031 )") 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- # cat 00:24:36.031 03:12:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:36.031 { 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme$subsystem", 00:24:36.031 "trtype": "$TEST_TRANSPORT", 
00:24:36.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.031 "adrfam": "ipv4", 00:24:36.031 "trsvcid": "$NVMF_PORT", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.031 "hdgst": ${hdgst:-false}, 00:24:36.031 "ddgst": ${ddgst:-false} 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 00:24:36.031 } 00:24:36.031 EOF 00:24:36.031 )") 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- # cat 00:24:36.031 03:12:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:36.031 { 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme$subsystem", 00:24:36.031 "trtype": "$TEST_TRANSPORT", 00:24:36.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.031 "adrfam": "ipv4", 00:24:36.031 "trsvcid": "$NVMF_PORT", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.031 "hdgst": ${hdgst:-false}, 00:24:36.031 "ddgst": ${ddgst:-false} 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 00:24:36.031 } 00:24:36.031 EOF 00:24:36.031 )") 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- # cat 00:24:36.031 03:12:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:36.031 { 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme$subsystem", 00:24:36.031 "trtype": "$TEST_TRANSPORT", 00:24:36.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.031 "adrfam": "ipv4", 00:24:36.031 "trsvcid": "$NVMF_PORT", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.031 "hdgst": ${hdgst:-false}, 00:24:36.031 "ddgst": ${ddgst:-false} 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 00:24:36.031 } 00:24:36.031 EOF 00:24:36.031 )") 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- 
# cat 00:24:36.031 03:12:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:36.031 { 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme$subsystem", 00:24:36.031 "trtype": "$TEST_TRANSPORT", 00:24:36.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.031 "adrfam": "ipv4", 00:24:36.031 "trsvcid": "$NVMF_PORT", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.031 "hdgst": ${hdgst:-false}, 00:24:36.031 "ddgst": ${ddgst:-false} 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 00:24:36.031 } 00:24:36.031 EOF 00:24:36.031 )") 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- # cat 00:24:36.031 03:12:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:36.031 { 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme$subsystem", 00:24:36.031 "trtype": "$TEST_TRANSPORT", 00:24:36.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.031 "adrfam": "ipv4", 00:24:36.031 "trsvcid": "$NVMF_PORT", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.031 "hdgst": ${hdgst:-false}, 00:24:36.031 "ddgst": ${ddgst:-false} 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 00:24:36.031 } 00:24:36.031 EOF 00:24:36.031 )") 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- # cat 00:24:36.031 03:12:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:36.031 { 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme$subsystem", 00:24:36.031 "trtype": "$TEST_TRANSPORT", 00:24:36.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.031 "adrfam": "ipv4", 00:24:36.031 "trsvcid": "$NVMF_PORT", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.031 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:24:36.031 "hdgst": ${hdgst:-false}, 00:24:36.031 "ddgst": ${ddgst:-false} 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 00:24:36.031 } 00:24:36.031 EOF 00:24:36.031 )") 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- # cat 00:24:36.031 03:12:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:36.031 { 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme$subsystem", 00:24:36.031 "trtype": "$TEST_TRANSPORT", 00:24:36.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.031 "adrfam": "ipv4", 00:24:36.031 "trsvcid": "$NVMF_PORT", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.031 "hdgst": ${hdgst:-false}, 00:24:36.031 "ddgst": ${ddgst:-false} 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 00:24:36.031 } 00:24:36.031 EOF 00:24:36.031 )") 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- # cat 00:24:36.031 03:12:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:36.031 { 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme$subsystem", 00:24:36.031 "trtype": "$TEST_TRANSPORT", 00:24:36.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:36.031 "adrfam": "ipv4", 00:24:36.031 "trsvcid": "$NVMF_PORT", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:36.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:36.031 "hdgst": ${hdgst:-false}, 00:24:36.031 "ddgst": ${ddgst:-false} 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 00:24:36.031 } 00:24:36.031 EOF 00:24:36.031 )") 00:24:36.031 03:12:31 -- nvmf/common.sh@542 -- # cat 00:24:36.031 03:12:31 -- nvmf/common.sh@544 -- # jq . 
00:24:36.031 03:12:31 -- nvmf/common.sh@545 -- # IFS=, 00:24:36.031 03:12:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme1", 00:24:36.031 "trtype": "tcp", 00:24:36.031 "traddr": "10.0.0.2", 00:24:36.031 "adrfam": "ipv4", 00:24:36.031 "trsvcid": "4420", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.031 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:36.031 "hdgst": false, 00:24:36.031 "ddgst": false 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 00:24:36.031 },{ 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme2", 00:24:36.031 "trtype": "tcp", 00:24:36.031 "traddr": "10.0.0.2", 00:24:36.031 "adrfam": "ipv4", 00:24:36.031 "trsvcid": "4420", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:36.031 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:36.031 "hdgst": false, 00:24:36.031 "ddgst": false 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 00:24:36.031 },{ 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme3", 00:24:36.031 "trtype": "tcp", 00:24:36.031 "traddr": "10.0.0.2", 00:24:36.031 "adrfam": "ipv4", 00:24:36.031 "trsvcid": "4420", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:36.031 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:36.031 "hdgst": false, 00:24:36.031 "ddgst": false 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 00:24:36.031 },{ 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme4", 00:24:36.031 "trtype": "tcp", 00:24:36.031 "traddr": "10.0.0.2", 00:24:36.031 "adrfam": "ipv4", 00:24:36.031 "trsvcid": "4420", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:36.031 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:36.031 "hdgst": false, 00:24:36.031 "ddgst": false 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 00:24:36.031 },{ 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme5", 00:24:36.031 "trtype": "tcp", 00:24:36.031 "traddr": "10.0.0.2", 00:24:36.031 "adrfam": "ipv4", 
00:24:36.031 "trsvcid": "4420", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:36.031 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:36.031 "hdgst": false, 00:24:36.031 "ddgst": false 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 00:24:36.031 },{ 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme6", 00:24:36.031 "trtype": "tcp", 00:24:36.031 "traddr": "10.0.0.2", 00:24:36.031 "adrfam": "ipv4", 00:24:36.031 "trsvcid": "4420", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:36.031 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:36.031 "hdgst": false, 00:24:36.031 "ddgst": false 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 00:24:36.031 },{ 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme7", 00:24:36.031 "trtype": "tcp", 00:24:36.031 "traddr": "10.0.0.2", 00:24:36.031 "adrfam": "ipv4", 00:24:36.031 "trsvcid": "4420", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:36.031 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:36.031 "hdgst": false, 00:24:36.031 "ddgst": false 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 00:24:36.031 },{ 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme8", 00:24:36.031 "trtype": "tcp", 00:24:36.031 "traddr": "10.0.0.2", 00:24:36.031 "adrfam": "ipv4", 00:24:36.031 "trsvcid": "4420", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:36.031 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:36.031 "hdgst": false, 00:24:36.031 "ddgst": false 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 00:24:36.031 },{ 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme9", 00:24:36.031 "trtype": "tcp", 00:24:36.031 "traddr": "10.0.0.2", 00:24:36.031 "adrfam": "ipv4", 00:24:36.031 "trsvcid": "4420", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:36.031 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:36.031 "hdgst": false, 00:24:36.031 "ddgst": false 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 
00:24:36.031 },{ 00:24:36.031 "params": { 00:24:36.031 "name": "Nvme10", 00:24:36.031 "trtype": "tcp", 00:24:36.031 "traddr": "10.0.0.2", 00:24:36.031 "adrfam": "ipv4", 00:24:36.031 "trsvcid": "4420", 00:24:36.031 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:36.031 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:36.031 "hdgst": false, 00:24:36.031 "ddgst": false 00:24:36.031 }, 00:24:36.031 "method": "bdev_nvme_attach_controller" 00:24:36.031 }' 00:24:36.031 [2024-07-14 03:12:31.197758] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:36.031 [2024-07-14 03:12:31.197832] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:36.031 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.031 [2024-07-14 03:12:31.263956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.289 [2024-07-14 03:12:31.349583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.660 03:12:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:37.660 03:12:32 -- common/autotest_common.sh@852 -- # return 0 00:24:37.660 03:12:32 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:37.660 03:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:37.660 03:12:32 -- common/autotest_common.sh@10 -- # set +x 00:24:37.660 03:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:37.660 03:12:32 -- target/shutdown.sh@83 -- # kill -9 2074997 00:24:37.660 03:12:32 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:37.660 03:12:32 -- target/shutdown.sh@87 -- # sleep 1 00:24:39.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2074997 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 
"${num_subsystems[@]}") 00:24:39.029 03:12:33 -- target/shutdown.sh@88 -- # kill -0 2074807 00:24:39.029 03:12:33 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:39.029 03:12:33 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:39.029 03:12:33 -- nvmf/common.sh@520 -- # config=() 00:24:39.029 03:12:33 -- nvmf/common.sh@520 -- # local subsystem config 00:24:39.029 03:12:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.029 03:12:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:39.029 { 00:24:39.029 "params": { 00:24:39.029 "name": "Nvme$subsystem", 00:24:39.029 "trtype": "$TEST_TRANSPORT", 00:24:39.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.029 "adrfam": "ipv4", 00:24:39.029 "trsvcid": "$NVMF_PORT", 00:24:39.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.029 "hdgst": ${hdgst:-false}, 00:24:39.029 "ddgst": ${ddgst:-false} 00:24:39.029 }, 00:24:39.029 "method": "bdev_nvme_attach_controller" 00:24:39.029 } 00:24:39.029 EOF 00:24:39.029 )") 00:24:39.029 03:12:33 -- nvmf/common.sh@542 -- # cat 00:24:39.029 03:12:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.029 03:12:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:39.029 { 00:24:39.029 "params": { 00:24:39.029 "name": "Nvme$subsystem", 00:24:39.029 "trtype": "$TEST_TRANSPORT", 00:24:39.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.029 "adrfam": "ipv4", 00:24:39.029 "trsvcid": "$NVMF_PORT", 00:24:39.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.029 "hdgst": ${hdgst:-false}, 00:24:39.029 "ddgst": ${ddgst:-false} 00:24:39.029 }, 00:24:39.029 "method": "bdev_nvme_attach_controller" 00:24:39.029 } 00:24:39.029 EOF 00:24:39.029 )") 00:24:39.029 
03:12:33 -- nvmf/common.sh@542 -- # cat 00:24:39.029 03:12:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.029 03:12:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:39.029 { 00:24:39.029 "params": { 00:24:39.029 "name": "Nvme$subsystem", 00:24:39.029 "trtype": "$TEST_TRANSPORT", 00:24:39.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.029 "adrfam": "ipv4", 00:24:39.029 "trsvcid": "$NVMF_PORT", 00:24:39.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.029 "hdgst": ${hdgst:-false}, 00:24:39.029 "ddgst": ${ddgst:-false} 00:24:39.029 }, 00:24:39.029 "method": "bdev_nvme_attach_controller" 00:24:39.029 } 00:24:39.029 EOF 00:24:39.029 )") 00:24:39.029 03:12:33 -- nvmf/common.sh@542 -- # cat 00:24:39.029 03:12:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.029 03:12:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:39.029 { 00:24:39.029 "params": { 00:24:39.029 "name": "Nvme$subsystem", 00:24:39.029 "trtype": "$TEST_TRANSPORT", 00:24:39.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.029 "adrfam": "ipv4", 00:24:39.029 "trsvcid": "$NVMF_PORT", 00:24:39.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.029 "hdgst": ${hdgst:-false}, 00:24:39.029 "ddgst": ${ddgst:-false} 00:24:39.029 }, 00:24:39.029 "method": "bdev_nvme_attach_controller" 00:24:39.029 } 00:24:39.029 EOF 00:24:39.029 )") 00:24:39.029 03:12:33 -- nvmf/common.sh@542 -- # cat 00:24:39.029 03:12:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.029 03:12:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:39.029 { 00:24:39.029 "params": { 00:24:39.029 "name": "Nvme$subsystem", 00:24:39.029 "trtype": "$TEST_TRANSPORT", 00:24:39.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.029 "adrfam": "ipv4", 00:24:39.029 "trsvcid": "$NVMF_PORT", 00:24:39.029 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.029 "hdgst": ${hdgst:-false}, 00:24:39.029 "ddgst": ${ddgst:-false} 00:24:39.029 }, 00:24:39.029 "method": "bdev_nvme_attach_controller" 00:24:39.029 } 00:24:39.029 EOF 00:24:39.029 )") 00:24:39.029 03:12:33 -- nvmf/common.sh@542 -- # cat 00:24:39.029 03:12:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.029 03:12:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:39.029 { 00:24:39.029 "params": { 00:24:39.029 "name": "Nvme$subsystem", 00:24:39.029 "trtype": "$TEST_TRANSPORT", 00:24:39.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.029 "adrfam": "ipv4", 00:24:39.029 "trsvcid": "$NVMF_PORT", 00:24:39.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.029 "hdgst": ${hdgst:-false}, 00:24:39.029 "ddgst": ${ddgst:-false} 00:24:39.029 }, 00:24:39.029 "method": "bdev_nvme_attach_controller" 00:24:39.029 } 00:24:39.029 EOF 00:24:39.029 )") 00:24:39.029 03:12:33 -- nvmf/common.sh@542 -- # cat 00:24:39.029 03:12:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.029 03:12:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:39.029 { 00:24:39.029 "params": { 00:24:39.029 "name": "Nvme$subsystem", 00:24:39.029 "trtype": "$TEST_TRANSPORT", 00:24:39.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.029 "adrfam": "ipv4", 00:24:39.029 "trsvcid": "$NVMF_PORT", 00:24:39.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.029 "hdgst": ${hdgst:-false}, 00:24:39.029 "ddgst": ${ddgst:-false} 00:24:39.029 }, 00:24:39.029 "method": "bdev_nvme_attach_controller" 00:24:39.029 } 00:24:39.029 EOF 00:24:39.029 )") 00:24:39.029 03:12:33 -- nvmf/common.sh@542 -- # cat 00:24:39.029 03:12:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.029 03:12:33 -- nvmf/common.sh@542 -- # 
config+=("$(cat <<-EOF 00:24:39.029 { 00:24:39.029 "params": { 00:24:39.029 "name": "Nvme$subsystem", 00:24:39.029 "trtype": "$TEST_TRANSPORT", 00:24:39.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.029 "adrfam": "ipv4", 00:24:39.029 "trsvcid": "$NVMF_PORT", 00:24:39.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.029 "hdgst": ${hdgst:-false}, 00:24:39.029 "ddgst": ${ddgst:-false} 00:24:39.029 }, 00:24:39.029 "method": "bdev_nvme_attach_controller" 00:24:39.029 } 00:24:39.029 EOF 00:24:39.029 )") 00:24:39.029 03:12:33 -- nvmf/common.sh@542 -- # cat 00:24:39.029 03:12:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.029 03:12:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:39.029 { 00:24:39.029 "params": { 00:24:39.029 "name": "Nvme$subsystem", 00:24:39.029 "trtype": "$TEST_TRANSPORT", 00:24:39.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.029 "adrfam": "ipv4", 00:24:39.029 "trsvcid": "$NVMF_PORT", 00:24:39.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.030 "hdgst": ${hdgst:-false}, 00:24:39.030 "ddgst": ${ddgst:-false} 00:24:39.030 }, 00:24:39.030 "method": "bdev_nvme_attach_controller" 00:24:39.030 } 00:24:39.030 EOF 00:24:39.030 )") 00:24:39.030 03:12:33 -- nvmf/common.sh@542 -- # cat 00:24:39.030 03:12:33 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:39.030 03:12:33 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:39.030 { 00:24:39.030 "params": { 00:24:39.030 "name": "Nvme$subsystem", 00:24:39.030 "trtype": "$TEST_TRANSPORT", 00:24:39.030 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:39.030 "adrfam": "ipv4", 00:24:39.030 "trsvcid": "$NVMF_PORT", 00:24:39.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:39.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:39.030 "hdgst": ${hdgst:-false}, 00:24:39.030 "ddgst": ${ddgst:-false} 00:24:39.030 }, 
00:24:39.030 "method": "bdev_nvme_attach_controller" 00:24:39.030 } 00:24:39.030 EOF 00:24:39.030 )") 00:24:39.030 03:12:33 -- nvmf/common.sh@542 -- # cat 00:24:39.030 03:12:33 -- nvmf/common.sh@544 -- # jq . 00:24:39.030 03:12:33 -- nvmf/common.sh@545 -- # IFS=, 00:24:39.030 03:12:33 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:39.030 "params": { 00:24:39.030 "name": "Nvme1", 00:24:39.030 "trtype": "tcp", 00:24:39.030 "traddr": "10.0.0.2", 00:24:39.030 "adrfam": "ipv4", 00:24:39.030 "trsvcid": "4420", 00:24:39.030 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:39.030 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:39.030 "hdgst": false, 00:24:39.030 "ddgst": false 00:24:39.030 }, 00:24:39.030 "method": "bdev_nvme_attach_controller" 00:24:39.030 },{ 00:24:39.030 "params": { 00:24:39.030 "name": "Nvme2", 00:24:39.030 "trtype": "tcp", 00:24:39.030 "traddr": "10.0.0.2", 00:24:39.030 "adrfam": "ipv4", 00:24:39.030 "trsvcid": "4420", 00:24:39.030 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:39.030 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:39.030 "hdgst": false, 00:24:39.030 "ddgst": false 00:24:39.030 }, 00:24:39.030 "method": "bdev_nvme_attach_controller" 00:24:39.030 },{ 00:24:39.030 "params": { 00:24:39.030 "name": "Nvme3", 00:24:39.030 "trtype": "tcp", 00:24:39.030 "traddr": "10.0.0.2", 00:24:39.030 "adrfam": "ipv4", 00:24:39.030 "trsvcid": "4420", 00:24:39.030 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:39.030 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:39.030 "hdgst": false, 00:24:39.030 "ddgst": false 00:24:39.030 }, 00:24:39.030 "method": "bdev_nvme_attach_controller" 00:24:39.030 },{ 00:24:39.030 "params": { 00:24:39.030 "name": "Nvme4", 00:24:39.030 "trtype": "tcp", 00:24:39.030 "traddr": "10.0.0.2", 00:24:39.030 "adrfam": "ipv4", 00:24:39.030 "trsvcid": "4420", 00:24:39.030 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:39.030 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:39.030 "hdgst": false, 00:24:39.030 "ddgst": false 00:24:39.030 }, 
00:24:39.030 "method": "bdev_nvme_attach_controller" 00:24:39.030 },{ 00:24:39.030 "params": { 00:24:39.030 "name": "Nvme5", 00:24:39.030 "trtype": "tcp", 00:24:39.030 "traddr": "10.0.0.2", 00:24:39.030 "adrfam": "ipv4", 00:24:39.030 "trsvcid": "4420", 00:24:39.030 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:39.030 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:39.030 "hdgst": false, 00:24:39.030 "ddgst": false 00:24:39.030 }, 00:24:39.030 "method": "bdev_nvme_attach_controller" 00:24:39.030 },{ 00:24:39.030 "params": { 00:24:39.030 "name": "Nvme6", 00:24:39.030 "trtype": "tcp", 00:24:39.030 "traddr": "10.0.0.2", 00:24:39.030 "adrfam": "ipv4", 00:24:39.030 "trsvcid": "4420", 00:24:39.030 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:39.030 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:39.030 "hdgst": false, 00:24:39.030 "ddgst": false 00:24:39.030 }, 00:24:39.030 "method": "bdev_nvme_attach_controller" 00:24:39.030 },{ 00:24:39.030 "params": { 00:24:39.030 "name": "Nvme7", 00:24:39.030 "trtype": "tcp", 00:24:39.030 "traddr": "10.0.0.2", 00:24:39.030 "adrfam": "ipv4", 00:24:39.030 "trsvcid": "4420", 00:24:39.030 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:39.030 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:39.030 "hdgst": false, 00:24:39.030 "ddgst": false 00:24:39.030 }, 00:24:39.030 "method": "bdev_nvme_attach_controller" 00:24:39.030 },{ 00:24:39.030 "params": { 00:24:39.030 "name": "Nvme8", 00:24:39.030 "trtype": "tcp", 00:24:39.030 "traddr": "10.0.0.2", 00:24:39.030 "adrfam": "ipv4", 00:24:39.030 "trsvcid": "4420", 00:24:39.030 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:39.030 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:39.030 "hdgst": false, 00:24:39.030 "ddgst": false 00:24:39.030 }, 00:24:39.030 "method": "bdev_nvme_attach_controller" 00:24:39.030 },{ 00:24:39.030 "params": { 00:24:39.030 "name": "Nvme9", 00:24:39.030 "trtype": "tcp", 00:24:39.030 "traddr": "10.0.0.2", 00:24:39.030 "adrfam": "ipv4", 00:24:39.030 "trsvcid": "4420", 00:24:39.030 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:39.030 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:39.030 "hdgst": false, 00:24:39.030 "ddgst": false 00:24:39.030 }, 00:24:39.030 "method": "bdev_nvme_attach_controller" 00:24:39.030 },{ 00:24:39.030 "params": { 00:24:39.030 "name": "Nvme10", 00:24:39.030 "trtype": "tcp", 00:24:39.030 "traddr": "10.0.0.2", 00:24:39.030 "adrfam": "ipv4", 00:24:39.030 "trsvcid": "4420", 00:24:39.030 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:39.030 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:39.030 "hdgst": false, 00:24:39.030 "ddgst": false 00:24:39.030 }, 00:24:39.030 "method": "bdev_nvme_attach_controller" 00:24:39.030 }' 00:24:39.030 [2024-07-14 03:12:33.902575] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:39.030 [2024-07-14 03:12:33.902652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2075427 ] 00:24:39.030 EAL: No free 2048 kB hugepages reported on node 1 00:24:39.030 [2024-07-14 03:12:33.967798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.030 [2024-07-14 03:12:34.052683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.401 Running I/O for 1 seconds... 
00:24:41.770 00:24:41.770 Latency(us) 00:24:41.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.770 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:41.770 Verification LBA range: start 0x0 length 0x400 00:24:41.770 Nvme1n1 : 1.09 401.44 25.09 0.00 0.00 156260.63 23495.87 139033.41 00:24:41.770 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:41.770 Verification LBA range: start 0x0 length 0x400 00:24:41.770 Nvme2n1 : 1.10 394.70 24.67 0.00 0.00 158237.51 18350.08 128936.01 00:24:41.770 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:41.770 Verification LBA range: start 0x0 length 0x400 00:24:41.770 Nvme3n1 : 1.08 369.13 23.07 0.00 0.00 166588.32 11359.57 154567.87 00:24:41.770 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:41.770 Verification LBA range: start 0x0 length 0x400 00:24:41.770 Nvme4n1 : 1.09 399.32 24.96 0.00 0.00 154465.97 15146.10 118061.89 00:24:41.770 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:41.770 Verification LBA range: start 0x0 length 0x400 00:24:41.770 Nvme5n1 : 1.11 392.36 24.52 0.00 0.00 156097.04 16699.54 127382.57 00:24:41.770 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:41.770 Verification LBA range: start 0x0 length 0x400 00:24:41.770 Nvme6n1 : 1.11 390.14 24.38 0.00 0.00 156150.73 14272.28 125052.40 00:24:41.770 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:41.770 Verification LBA range: start 0x0 length 0x400 00:24:41.770 Nvme7n1 : 1.11 390.84 24.43 0.00 0.00 154541.81 15825.73 130489.46 00:24:41.770 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:41.770 Verification LBA range: start 0x0 length 0x400 00:24:41.770 Nvme8n1 : 1.16 382.63 23.91 0.00 0.00 151957.85 13301.38 124275.67 00:24:41.770 Job: Nvme9n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:24:41.770 Verification LBA range: start 0x0 length 0x400 00:24:41.770 Nvme9n1 : 1.12 432.37 27.02 0.00 0.00 138400.51 12427.57 113401.55 00:24:41.770 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:41.770 Verification LBA range: start 0x0 length 0x400 00:24:41.770 Nvme10n1 : 1.16 383.11 23.94 0.00 0.00 150227.24 7330.32 146800.64 00:24:41.770 =================================================================================================================== 00:24:41.770 Total : 3936.03 246.00 0.00 0.00 154008.45 7330.32 154567.87 00:24:41.770 03:12:37 -- target/shutdown.sh@93 -- # stoptarget 00:24:41.770 03:12:37 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:41.770 03:12:37 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:41.770 03:12:37 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:41.770 03:12:37 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:41.770 03:12:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:41.770 03:12:37 -- nvmf/common.sh@116 -- # sync 00:24:41.770 03:12:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:41.770 03:12:37 -- nvmf/common.sh@119 -- # set +e 00:24:41.770 03:12:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:41.770 03:12:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:41.770 rmmod nvme_tcp 00:24:42.027 rmmod nvme_fabrics 00:24:42.027 rmmod nvme_keyring 00:24:42.027 03:12:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:42.027 03:12:37 -- nvmf/common.sh@123 -- # set -e 00:24:42.027 03:12:37 -- nvmf/common.sh@124 -- # return 0 00:24:42.027 03:12:37 -- nvmf/common.sh@477 -- # '[' -n 2074807 ']' 00:24:42.027 03:12:37 -- nvmf/common.sh@478 -- # killprocess 2074807 00:24:42.027 03:12:37 -- common/autotest_common.sh@926 -- # '[' -z 2074807 ']' 00:24:42.027 03:12:37 -- 
common/autotest_common.sh@930 -- # kill -0 2074807 00:24:42.027 03:12:37 -- common/autotest_common.sh@931 -- # uname 00:24:42.027 03:12:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:42.027 03:12:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2074807 00:24:42.027 03:12:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:42.027 03:12:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:42.027 03:12:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2074807' 00:24:42.027 killing process with pid 2074807 00:24:42.027 03:12:37 -- common/autotest_common.sh@945 -- # kill 2074807 00:24:42.027 03:12:37 -- common/autotest_common.sh@950 -- # wait 2074807 00:24:42.590 03:12:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:42.590 03:12:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:42.590 03:12:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:42.590 03:12:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:42.590 03:12:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:42.590 03:12:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.590 03:12:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:42.590 03:12:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.523 03:12:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:44.523 00:24:44.523 real 0m12.055s 00:24:44.523 user 0m35.644s 00:24:44.523 sys 0m3.234s 00:24:44.523 03:12:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:44.523 03:12:39 -- common/autotest_common.sh@10 -- # set +x 00:24:44.523 ************************************ 00:24:44.523 END TEST nvmf_shutdown_tc1 00:24:44.523 ************************************ 00:24:44.523 03:12:39 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:44.523 03:12:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
00:24:44.523 03:12:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:44.523 03:12:39 -- common/autotest_common.sh@10 -- # set +x 00:24:44.523 ************************************ 00:24:44.523 START TEST nvmf_shutdown_tc2 00:24:44.523 ************************************ 00:24:44.523 03:12:39 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:24:44.523 03:12:39 -- target/shutdown.sh@98 -- # starttarget 00:24:44.523 03:12:39 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:44.523 03:12:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:44.523 03:12:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.523 03:12:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:44.523 03:12:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:44.523 03:12:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:44.523 03:12:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.523 03:12:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:44.523 03:12:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.523 03:12:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:44.523 03:12:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:44.523 03:12:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:44.523 03:12:39 -- common/autotest_common.sh@10 -- # set +x 00:24:44.523 03:12:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:44.523 03:12:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:44.523 03:12:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:44.523 03:12:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:44.523 03:12:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:44.523 03:12:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:44.523 03:12:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:44.523 03:12:39 -- nvmf/common.sh@294 -- # net_devs=() 00:24:44.523 03:12:39 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:24:44.523 03:12:39 -- nvmf/common.sh@295 -- # e810=() 00:24:44.523 03:12:39 -- nvmf/common.sh@295 -- # local -ga e810 00:24:44.523 03:12:39 -- nvmf/common.sh@296 -- # x722=() 00:24:44.523 03:12:39 -- nvmf/common.sh@296 -- # local -ga x722 00:24:44.523 03:12:39 -- nvmf/common.sh@297 -- # mlx=() 00:24:44.523 03:12:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:44.523 03:12:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:44.523 03:12:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:44.523 03:12:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:44.523 03:12:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:44.523 03:12:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:44.523 03:12:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:44.523 03:12:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:44.523 03:12:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:44.523 03:12:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:44.523 03:12:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:44.523 03:12:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:44.523 03:12:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:44.523 03:12:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:44.523 03:12:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:44.523 03:12:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:44.523 03:12:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:44.523 03:12:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:44.523 03:12:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:44.523 03:12:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:44.523 Found 0000:0a:00.0 (0x8086 
- 0x159b) 00:24:44.523 03:12:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:44.523 03:12:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:44.523 03:12:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.523 03:12:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.523 03:12:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:44.523 03:12:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:44.523 03:12:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:44.523 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:44.523 03:12:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:44.523 03:12:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:44.523 03:12:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:44.524 03:12:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:44.524 03:12:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:44.524 03:12:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:44.524 03:12:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:44.524 03:12:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:44.524 03:12:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:44.524 03:12:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.524 03:12:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:44.524 03:12:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.524 03:12:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:44.524 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:44.524 03:12:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.524 03:12:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:44.524 03:12:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:44.524 03:12:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:44.524 03:12:39 -- 
nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:44.524 03:12:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:44.524 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:44.524 03:12:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:44.524 03:12:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:44.524 03:12:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:44.524 03:12:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:44.524 03:12:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:44.524 03:12:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:44.524 03:12:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:44.524 03:12:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:44.524 03:12:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:44.524 03:12:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:44.524 03:12:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:44.524 03:12:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:44.524 03:12:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:44.524 03:12:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:44.524 03:12:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:44.524 03:12:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:44.524 03:12:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:44.524 03:12:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:44.524 03:12:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:44.524 03:12:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:44.524 03:12:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:44.524 03:12:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:44.524 03:12:39 -- nvmf/common.sh@259 -- # ip 
netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:44.524 03:12:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:44.782 03:12:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:44.782 03:12:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:44.782 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:44.782 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:24:44.782 00:24:44.782 --- 10.0.0.2 ping statistics --- 00:24:44.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.782 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:24:44.782 03:12:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:44.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:44.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:24:44.782 00:24:44.782 --- 10.0.0.1 ping statistics --- 00:24:44.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:44.782 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:24:44.782 03:12:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:44.782 03:12:39 -- nvmf/common.sh@410 -- # return 0 00:24:44.782 03:12:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:44.782 03:12:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:44.782 03:12:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:44.782 03:12:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:44.782 03:12:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:44.782 03:12:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:44.782 03:12:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:44.782 03:12:39 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:44.782 03:12:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:44.782 03:12:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:44.782 03:12:39 -- 
common/autotest_common.sh@10 -- # set +x 00:24:44.782 03:12:39 -- nvmf/common.sh@469 -- # nvmfpid=2076218 00:24:44.782 03:12:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:44.782 03:12:39 -- nvmf/common.sh@470 -- # waitforlisten 2076218 00:24:44.782 03:12:39 -- common/autotest_common.sh@819 -- # '[' -z 2076218 ']' 00:24:44.782 03:12:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:44.782 03:12:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:44.782 03:12:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:44.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:44.782 03:12:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:44.782 03:12:39 -- common/autotest_common.sh@10 -- # set +x 00:24:44.782 [2024-07-14 03:12:39.848346] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:44.782 [2024-07-14 03:12:39.848413] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:44.782 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.782 [2024-07-14 03:12:39.916890] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:44.782 [2024-07-14 03:12:40.013031] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:44.782 [2024-07-14 03:12:40.013188] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.782 [2024-07-14 03:12:40.013207] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:44.782 [2024-07-14 03:12:40.013220] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:44.782 [2024-07-14 03:12:40.013286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.782 [2024-07-14 03:12:40.013317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:44.782 [2024-07-14 03:12:40.013385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:44.782 [2024-07-14 03:12:40.013387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.715 03:12:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:45.715 03:12:40 -- common/autotest_common.sh@852 -- # return 0 00:24:45.715 03:12:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:45.715 03:12:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:45.715 03:12:40 -- common/autotest_common.sh@10 -- # set +x 00:24:45.715 03:12:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.715 03:12:40 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:45.715 03:12:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:45.715 03:12:40 -- common/autotest_common.sh@10 -- # set +x 00:24:45.715 [2024-07-14 03:12:40.835508] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.715 03:12:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:45.715 03:12:40 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:45.715 03:12:40 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:45.715 03:12:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:45.715 03:12:40 -- common/autotest_common.sh@10 -- # set +x 00:24:45.715 03:12:40 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:45.715 03:12:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:24:45.715 03:12:40 -- target/shutdown.sh@28 -- # cat 00:24:45.715 03:12:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.715 03:12:40 -- target/shutdown.sh@28 -- # cat 00:24:45.715 03:12:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.715 03:12:40 -- target/shutdown.sh@28 -- # cat 00:24:45.715 03:12:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.715 03:12:40 -- target/shutdown.sh@28 -- # cat 00:24:45.715 03:12:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.715 03:12:40 -- target/shutdown.sh@28 -- # cat 00:24:45.715 03:12:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.715 03:12:40 -- target/shutdown.sh@28 -- # cat 00:24:45.715 03:12:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.715 03:12:40 -- target/shutdown.sh@28 -- # cat 00:24:45.715 03:12:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.715 03:12:40 -- target/shutdown.sh@28 -- # cat 00:24:45.715 03:12:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.715 03:12:40 -- target/shutdown.sh@28 -- # cat 00:24:45.715 03:12:40 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:45.715 03:12:40 -- target/shutdown.sh@28 -- # cat 00:24:45.715 03:12:40 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:45.715 03:12:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:45.715 03:12:40 -- common/autotest_common.sh@10 -- # set +x 00:24:45.715 Malloc1 00:24:45.715 [2024-07-14 03:12:40.910795] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:45.715 Malloc2 00:24:45.974 Malloc3 00:24:45.974 Malloc4 00:24:45.974 Malloc5 00:24:45.974 Malloc6 00:24:45.974 Malloc7 00:24:46.232 Malloc8 00:24:46.232 Malloc9 00:24:46.232 Malloc10 00:24:46.232 03:12:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:46.232 03:12:41 -- target/shutdown.sh@36 -- # 
timing_exit create_subsystems 00:24:46.232 03:12:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:46.232 03:12:41 -- common/autotest_common.sh@10 -- # set +x 00:24:46.232 03:12:41 -- target/shutdown.sh@102 -- # perfpid=2076413 00:24:46.232 03:12:41 -- target/shutdown.sh@103 -- # waitforlisten 2076413 /var/tmp/bdevperf.sock 00:24:46.232 03:12:41 -- common/autotest_common.sh@819 -- # '[' -z 2076413 ']' 00:24:46.232 03:12:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:46.232 03:12:41 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:46.232 03:12:41 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:46.232 03:12:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:46.232 03:12:41 -- nvmf/common.sh@520 -- # config=() 00:24:46.232 03:12:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:46.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:46.232 03:12:41 -- nvmf/common.sh@520 -- # local subsystem config 00:24:46.232 03:12:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:46.232 03:12:41 -- common/autotest_common.sh@10 -- # set +x 00:24:46.232 03:12:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.232 03:12:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.232 { 00:24:46.232 "params": { 00:24:46.232 "name": "Nvme$subsystem", 00:24:46.232 "trtype": "$TEST_TRANSPORT", 00:24:46.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.232 "adrfam": "ipv4", 00:24:46.232 "trsvcid": "$NVMF_PORT", 00:24:46.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.232 "hdgst": ${hdgst:-false}, 00:24:46.232 "ddgst": ${ddgst:-false} 00:24:46.232 }, 00:24:46.232 "method": "bdev_nvme_attach_controller" 00:24:46.232 } 00:24:46.232 EOF 00:24:46.232 )") 00:24:46.232 03:12:41 -- nvmf/common.sh@542 -- # cat 00:24:46.232 03:12:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.232 03:12:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.232 { 00:24:46.232 "params": { 00:24:46.232 "name": "Nvme$subsystem", 00:24:46.232 "trtype": "$TEST_TRANSPORT", 00:24:46.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.232 "adrfam": "ipv4", 00:24:46.232 "trsvcid": "$NVMF_PORT", 00:24:46.232 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.232 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.232 "hdgst": ${hdgst:-false}, 00:24:46.232 "ddgst": ${ddgst:-false} 00:24:46.232 }, 00:24:46.232 "method": "bdev_nvme_attach_controller" 00:24:46.232 } 00:24:46.232 EOF 00:24:46.232 )") 00:24:46.232 03:12:41 -- nvmf/common.sh@542 -- # cat 00:24:46.232 03:12:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.232 03:12:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.232 { 00:24:46.232 "params": { 00:24:46.232 "name": "Nvme$subsystem", 00:24:46.232 "trtype": "$TEST_TRANSPORT", 
00:24:46.232 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.233 "adrfam": "ipv4", 00:24:46.233 "trsvcid": "$NVMF_PORT", 00:24:46.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.233 "hdgst": ${hdgst:-false}, 00:24:46.233 "ddgst": ${ddgst:-false} 00:24:46.233 }, 00:24:46.233 "method": "bdev_nvme_attach_controller" 00:24:46.233 } 00:24:46.233 EOF 00:24:46.233 )") 00:24:46.233 03:12:41 -- nvmf/common.sh@542 -- # cat 00:24:46.233 03:12:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.233 03:12:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.233 { 00:24:46.233 "params": { 00:24:46.233 "name": "Nvme$subsystem", 00:24:46.233 "trtype": "$TEST_TRANSPORT", 00:24:46.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.233 "adrfam": "ipv4", 00:24:46.233 "trsvcid": "$NVMF_PORT", 00:24:46.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.233 "hdgst": ${hdgst:-false}, 00:24:46.233 "ddgst": ${ddgst:-false} 00:24:46.233 }, 00:24:46.233 "method": "bdev_nvme_attach_controller" 00:24:46.233 } 00:24:46.233 EOF 00:24:46.233 )") 00:24:46.233 03:12:41 -- nvmf/common.sh@542 -- # cat 00:24:46.233 03:12:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.233 03:12:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.233 { 00:24:46.233 "params": { 00:24:46.233 "name": "Nvme$subsystem", 00:24:46.233 "trtype": "$TEST_TRANSPORT", 00:24:46.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.233 "adrfam": "ipv4", 00:24:46.233 "trsvcid": "$NVMF_PORT", 00:24:46.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.233 "hdgst": ${hdgst:-false}, 00:24:46.233 "ddgst": ${ddgst:-false} 00:24:46.233 }, 00:24:46.233 "method": "bdev_nvme_attach_controller" 00:24:46.233 } 00:24:46.233 EOF 00:24:46.233 )") 00:24:46.233 03:12:41 -- nvmf/common.sh@542 -- 
# cat 00:24:46.233 03:12:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.233 03:12:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.233 { 00:24:46.233 "params": { 00:24:46.233 "name": "Nvme$subsystem", 00:24:46.233 "trtype": "$TEST_TRANSPORT", 00:24:46.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.233 "adrfam": "ipv4", 00:24:46.233 "trsvcid": "$NVMF_PORT", 00:24:46.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.233 "hdgst": ${hdgst:-false}, 00:24:46.233 "ddgst": ${ddgst:-false} 00:24:46.233 }, 00:24:46.233 "method": "bdev_nvme_attach_controller" 00:24:46.233 } 00:24:46.233 EOF 00:24:46.233 )") 00:24:46.233 03:12:41 -- nvmf/common.sh@542 -- # cat 00:24:46.233 03:12:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.233 03:12:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.233 { 00:24:46.233 "params": { 00:24:46.233 "name": "Nvme$subsystem", 00:24:46.233 "trtype": "$TEST_TRANSPORT", 00:24:46.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.233 "adrfam": "ipv4", 00:24:46.233 "trsvcid": "$NVMF_PORT", 00:24:46.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.233 "hdgst": ${hdgst:-false}, 00:24:46.233 "ddgst": ${ddgst:-false} 00:24:46.233 }, 00:24:46.233 "method": "bdev_nvme_attach_controller" 00:24:46.233 } 00:24:46.233 EOF 00:24:46.233 )") 00:24:46.233 03:12:41 -- nvmf/common.sh@542 -- # cat 00:24:46.233 03:12:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.233 03:12:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.233 { 00:24:46.233 "params": { 00:24:46.233 "name": "Nvme$subsystem", 00:24:46.233 "trtype": "$TEST_TRANSPORT", 00:24:46.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.233 "adrfam": "ipv4", 00:24:46.233 "trsvcid": "$NVMF_PORT", 00:24:46.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.233 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:24:46.233 "hdgst": ${hdgst:-false}, 00:24:46.233 "ddgst": ${ddgst:-false} 00:24:46.233 }, 00:24:46.233 "method": "bdev_nvme_attach_controller" 00:24:46.233 } 00:24:46.233 EOF 00:24:46.233 )") 00:24:46.233 03:12:41 -- nvmf/common.sh@542 -- # cat 00:24:46.233 03:12:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.233 03:12:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.233 { 00:24:46.233 "params": { 00:24:46.233 "name": "Nvme$subsystem", 00:24:46.233 "trtype": "$TEST_TRANSPORT", 00:24:46.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.233 "adrfam": "ipv4", 00:24:46.233 "trsvcid": "$NVMF_PORT", 00:24:46.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.233 "hdgst": ${hdgst:-false}, 00:24:46.233 "ddgst": ${ddgst:-false} 00:24:46.233 }, 00:24:46.233 "method": "bdev_nvme_attach_controller" 00:24:46.233 } 00:24:46.233 EOF 00:24:46.233 )") 00:24:46.233 03:12:41 -- nvmf/common.sh@542 -- # cat 00:24:46.233 03:12:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:46.233 03:12:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:46.233 { 00:24:46.233 "params": { 00:24:46.233 "name": "Nvme$subsystem", 00:24:46.233 "trtype": "$TEST_TRANSPORT", 00:24:46.233 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:46.233 "adrfam": "ipv4", 00:24:46.233 "trsvcid": "$NVMF_PORT", 00:24:46.233 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:46.233 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:46.233 "hdgst": ${hdgst:-false}, 00:24:46.233 "ddgst": ${ddgst:-false} 00:24:46.233 }, 00:24:46.233 "method": "bdev_nvme_attach_controller" 00:24:46.233 } 00:24:46.233 EOF 00:24:46.233 )") 00:24:46.233 03:12:41 -- nvmf/common.sh@542 -- # cat 00:24:46.233 03:12:41 -- nvmf/common.sh@544 -- # jq . 
00:24:46.233 03:12:41 -- nvmf/common.sh@545 -- # IFS=, 00:24:46.233 03:12:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:46.233 "params": { 00:24:46.233 "name": "Nvme1", 00:24:46.233 "trtype": "tcp", 00:24:46.233 "traddr": "10.0.0.2", 00:24:46.233 "adrfam": "ipv4", 00:24:46.233 "trsvcid": "4420", 00:24:46.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:46.233 "hdgst": false, 00:24:46.233 "ddgst": false 00:24:46.233 }, 00:24:46.233 "method": "bdev_nvme_attach_controller" 00:24:46.233 },{ 00:24:46.233 "params": { 00:24:46.233 "name": "Nvme2", 00:24:46.233 "trtype": "tcp", 00:24:46.233 "traddr": "10.0.0.2", 00:24:46.233 "adrfam": "ipv4", 00:24:46.233 "trsvcid": "4420", 00:24:46.233 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:46.233 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:46.233 "hdgst": false, 00:24:46.233 "ddgst": false 00:24:46.233 }, 00:24:46.233 "method": "bdev_nvme_attach_controller" 00:24:46.233 },{ 00:24:46.233 "params": { 00:24:46.233 "name": "Nvme3", 00:24:46.233 "trtype": "tcp", 00:24:46.233 "traddr": "10.0.0.2", 00:24:46.233 "adrfam": "ipv4", 00:24:46.233 "trsvcid": "4420", 00:24:46.233 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:46.233 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:46.233 "hdgst": false, 00:24:46.233 "ddgst": false 00:24:46.233 }, 00:24:46.233 "method": "bdev_nvme_attach_controller" 00:24:46.233 },{ 00:24:46.233 "params": { 00:24:46.233 "name": "Nvme4", 00:24:46.233 "trtype": "tcp", 00:24:46.233 "traddr": "10.0.0.2", 00:24:46.233 "adrfam": "ipv4", 00:24:46.233 "trsvcid": "4420", 00:24:46.233 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:46.233 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:46.233 "hdgst": false, 00:24:46.233 "ddgst": false 00:24:46.233 }, 00:24:46.233 "method": "bdev_nvme_attach_controller" 00:24:46.233 },{ 00:24:46.233 "params": { 00:24:46.233 "name": "Nvme5", 00:24:46.233 "trtype": "tcp", 00:24:46.233 "traddr": "10.0.0.2", 00:24:46.233 "adrfam": "ipv4", 
00:24:46.233 "trsvcid": "4420", 00:24:46.233 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:46.233 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:46.233 "hdgst": false, 00:24:46.233 "ddgst": false 00:24:46.233 }, 00:24:46.233 "method": "bdev_nvme_attach_controller" 00:24:46.233 },{ 00:24:46.233 "params": { 00:24:46.233 "name": "Nvme6", 00:24:46.233 "trtype": "tcp", 00:24:46.233 "traddr": "10.0.0.2", 00:24:46.233 "adrfam": "ipv4", 00:24:46.233 "trsvcid": "4420", 00:24:46.233 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:46.233 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:46.233 "hdgst": false, 00:24:46.233 "ddgst": false 00:24:46.233 }, 00:24:46.233 "method": "bdev_nvme_attach_controller" 00:24:46.233 },{ 00:24:46.233 "params": { 00:24:46.233 "name": "Nvme7", 00:24:46.233 "trtype": "tcp", 00:24:46.233 "traddr": "10.0.0.2", 00:24:46.233 "adrfam": "ipv4", 00:24:46.233 "trsvcid": "4420", 00:24:46.233 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:46.233 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:46.233 "hdgst": false, 00:24:46.233 "ddgst": false 00:24:46.233 }, 00:24:46.233 "method": "bdev_nvme_attach_controller" 00:24:46.233 },{ 00:24:46.233 "params": { 00:24:46.233 "name": "Nvme8", 00:24:46.233 "trtype": "tcp", 00:24:46.233 "traddr": "10.0.0.2", 00:24:46.233 "adrfam": "ipv4", 00:24:46.233 "trsvcid": "4420", 00:24:46.233 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:46.233 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:46.233 "hdgst": false, 00:24:46.233 "ddgst": false 00:24:46.233 }, 00:24:46.233 "method": "bdev_nvme_attach_controller" 00:24:46.233 },{ 00:24:46.233 "params": { 00:24:46.233 "name": "Nvme9", 00:24:46.233 "trtype": "tcp", 00:24:46.234 "traddr": "10.0.0.2", 00:24:46.234 "adrfam": "ipv4", 00:24:46.234 "trsvcid": "4420", 00:24:46.234 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:46.234 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:46.234 "hdgst": false, 00:24:46.234 "ddgst": false 00:24:46.234 }, 00:24:46.234 "method": "bdev_nvme_attach_controller" 
00:24:46.234 },{ 00:24:46.234 "params": { 00:24:46.234 "name": "Nvme10", 00:24:46.234 "trtype": "tcp", 00:24:46.234 "traddr": "10.0.0.2", 00:24:46.234 "adrfam": "ipv4", 00:24:46.234 "trsvcid": "4420", 00:24:46.234 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:46.234 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:46.234 "hdgst": false, 00:24:46.234 "ddgst": false 00:24:46.234 }, 00:24:46.234 "method": "bdev_nvme_attach_controller" 00:24:46.234 }' 00:24:46.234 [2024-07-14 03:12:41.434279] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:46.234 [2024-07-14 03:12:41.434366] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2076413 ] 00:24:46.234 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.492 [2024-07-14 03:12:41.497307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:46.492 [2024-07-14 03:12:41.581832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:48.391 Running I/O for 10 seconds... 
00:24:48.649 03:12:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:48.649 03:12:43 -- common/autotest_common.sh@852 -- # return 0 00:24:48.649 03:12:43 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:48.649 03:12:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:48.649 03:12:43 -- common/autotest_common.sh@10 -- # set +x 00:24:48.649 03:12:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:48.649 03:12:43 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:48.649 03:12:43 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:48.649 03:12:43 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:48.649 03:12:43 -- target/shutdown.sh@57 -- # local ret=1 00:24:48.649 03:12:43 -- target/shutdown.sh@58 -- # local i 00:24:48.649 03:12:43 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:48.649 03:12:43 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:48.649 03:12:43 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:48.649 03:12:43 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:48.649 03:12:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:48.649 03:12:43 -- common/autotest_common.sh@10 -- # set +x 00:24:48.649 03:12:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:48.649 03:12:43 -- target/shutdown.sh@60 -- # read_io_count=167 00:24:48.649 03:12:43 -- target/shutdown.sh@63 -- # '[' 167 -ge 100 ']' 00:24:48.649 03:12:43 -- target/shutdown.sh@64 -- # ret=0 00:24:48.649 03:12:43 -- target/shutdown.sh@65 -- # break 00:24:48.649 03:12:43 -- target/shutdown.sh@69 -- # return 0 00:24:48.649 03:12:43 -- target/shutdown.sh@109 -- # killprocess 2076413 00:24:48.649 03:12:43 -- common/autotest_common.sh@926 -- # '[' -z 2076413 ']' 00:24:48.649 03:12:43 -- common/autotest_common.sh@930 -- # kill -0 2076413 00:24:48.649 03:12:43 -- common/autotest_common.sh@931 -- # uname 
00:24:48.649 03:12:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:48.649 03:12:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2076413 00:24:48.649 03:12:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:48.649 03:12:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:48.649 03:12:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2076413' 00:24:48.649 killing process with pid 2076413 00:24:48.649 03:12:43 -- common/autotest_common.sh@945 -- # kill 2076413 00:24:48.649 03:12:43 -- common/autotest_common.sh@950 -- # wait 2076413 00:24:48.908 Received shutdown signal, test time was about 0.709741 seconds 00:24:48.908 00:24:48.908 Latency(us) 00:24:48.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:48.908 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:48.908 Verification LBA range: start 0x0 length 0x400 00:24:48.908 Nvme1n1 : 0.71 385.40 24.09 0.00 0.00 160431.49 27962.03 155344.59 00:24:48.908 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:48.908 Verification LBA range: start 0x0 length 0x400 00:24:48.908 Nvme2n1 : 0.69 394.01 24.63 0.00 0.00 155092.94 25826.04 121945.51 00:24:48.908 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:48.908 Verification LBA range: start 0x0 length 0x400 00:24:48.908 Nvme3n1 : 0.71 384.41 24.03 0.00 0.00 157462.38 26214.40 144470.47 00:24:48.908 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:48.908 Verification LBA range: start 0x0 length 0x400 00:24:48.908 Nvme4n1 : 0.69 392.39 24.52 0.00 0.00 152052.44 27185.30 121945.51 00:24:48.908 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:48.908 Verification LBA range: start 0x0 length 0x400 00:24:48.908 Nvme5n1 : 0.69 391.62 24.48 0.00 0.00 150698.18 27962.03 121945.51 00:24:48.908 Job: Nvme6n1 (Core 
Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:48.908 Verification LBA range: start 0x0 length 0x400 00:24:48.908 Nvme6n1 : 0.70 388.77 24.30 0.00 0.00 149960.72 28350.39 130489.46 00:24:48.908 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:48.908 Verification LBA range: start 0x0 length 0x400 00:24:48.908 Nvme7n1 : 0.70 386.91 24.18 0.00 0.00 149018.37 29127.11 131266.18 00:24:48.908 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:48.908 Verification LBA range: start 0x0 length 0x400 00:24:48.908 Nvme8n1 : 0.70 389.80 24.36 0.00 0.00 145957.88 29709.65 118838.61 00:24:48.908 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:48.908 Verification LBA range: start 0x0 length 0x400 00:24:48.908 Nvme9n1 : 0.69 340.90 21.31 0.00 0.00 164792.65 13204.29 135149.80 00:24:48.908 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:48.908 Verification LBA range: start 0x0 length 0x400 00:24:48.908 Nvme10n1 : 0.71 390.73 24.42 0.00 0.00 143505.18 13204.29 128159.29 00:24:48.908 =================================================================================================================== 00:24:48.908 Total : 3844.93 240.31 0.00 0.00 152711.52 13204.29 155344.59 00:24:49.166 03:12:44 -- target/shutdown.sh@112 -- # sleep 1 00:24:50.099 03:12:45 -- target/shutdown.sh@113 -- # kill -0 2076218 00:24:50.099 03:12:45 -- target/shutdown.sh@115 -- # stoptarget 00:24:50.099 03:12:45 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:50.099 03:12:45 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:50.099 03:12:45 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:50.099 03:12:45 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:50.099 03:12:45 -- nvmf/common.sh@476 -- # nvmfcleanup 
00:24:50.099 03:12:45 -- nvmf/common.sh@116 -- # sync 00:24:50.099 03:12:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:50.099 03:12:45 -- nvmf/common.sh@119 -- # set +e 00:24:50.099 03:12:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:50.099 03:12:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:50.099 rmmod nvme_tcp 00:24:50.099 rmmod nvme_fabrics 00:24:50.099 rmmod nvme_keyring 00:24:50.099 03:12:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:50.099 03:12:45 -- nvmf/common.sh@123 -- # set -e 00:24:50.099 03:12:45 -- nvmf/common.sh@124 -- # return 0 00:24:50.099 03:12:45 -- nvmf/common.sh@477 -- # '[' -n 2076218 ']' 00:24:50.099 03:12:45 -- nvmf/common.sh@478 -- # killprocess 2076218 00:24:50.099 03:12:45 -- common/autotest_common.sh@926 -- # '[' -z 2076218 ']' 00:24:50.099 03:12:45 -- common/autotest_common.sh@930 -- # kill -0 2076218 00:24:50.099 03:12:45 -- common/autotest_common.sh@931 -- # uname 00:24:50.099 03:12:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:50.099 03:12:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2076218 00:24:50.099 03:12:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:50.099 03:12:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:50.099 03:12:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2076218' 00:24:50.099 killing process with pid 2076218 00:24:50.099 03:12:45 -- common/autotest_common.sh@945 -- # kill 2076218 00:24:50.099 03:12:45 -- common/autotest_common.sh@950 -- # wait 2076218 00:24:50.665 03:12:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:50.665 03:12:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:50.665 03:12:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:50.665 03:12:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:50.665 03:12:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:50.665 03:12:45 -- nvmf/common.sh@616 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.665 03:12:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.665 03:12:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.569 03:12:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:52.569 00:24:52.569 real 0m8.171s 00:24:52.569 user 0m25.664s 00:24:52.569 sys 0m1.553s 00:24:52.569 03:12:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:52.569 03:12:47 -- common/autotest_common.sh@10 -- # set +x 00:24:52.569 ************************************ 00:24:52.569 END TEST nvmf_shutdown_tc2 00:24:52.569 ************************************ 00:24:52.569 03:12:47 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:52.569 03:12:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:52.569 03:12:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:52.569 03:12:47 -- common/autotest_common.sh@10 -- # set +x 00:24:52.569 ************************************ 00:24:52.569 START TEST nvmf_shutdown_tc3 00:24:52.569 ************************************ 00:24:52.569 03:12:47 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:24:52.569 03:12:47 -- target/shutdown.sh@120 -- # starttarget 00:24:52.569 03:12:47 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:52.569 03:12:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:52.569 03:12:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:52.569 03:12:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:52.569 03:12:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:52.569 03:12:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:52.569 03:12:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.569 03:12:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.569 03:12:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:52.828 03:12:47 -- nvmf/common.sh@402 -- # [[ phy != 
virt ]] 00:24:52.828 03:12:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:52.828 03:12:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:52.828 03:12:47 -- common/autotest_common.sh@10 -- # set +x 00:24:52.828 03:12:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:52.828 03:12:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:52.828 03:12:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:52.828 03:12:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:52.828 03:12:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:52.828 03:12:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:52.828 03:12:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:52.828 03:12:47 -- nvmf/common.sh@294 -- # net_devs=() 00:24:52.829 03:12:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:52.829 03:12:47 -- nvmf/common.sh@295 -- # e810=() 00:24:52.829 03:12:47 -- nvmf/common.sh@295 -- # local -ga e810 00:24:52.829 03:12:47 -- nvmf/common.sh@296 -- # x722=() 00:24:52.829 03:12:47 -- nvmf/common.sh@296 -- # local -ga x722 00:24:52.829 03:12:47 -- nvmf/common.sh@297 -- # mlx=() 00:24:52.829 03:12:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:52.829 03:12:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:52.829 03:12:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:52.829 03:12:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:52.829 03:12:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:52.829 03:12:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:52.829 03:12:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:52.829 03:12:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:52.829 03:12:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:52.829 03:12:47 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:52.829 03:12:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:52.829 03:12:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:52.829 03:12:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:52.829 03:12:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:52.829 03:12:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:52.829 03:12:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:52.829 03:12:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:52.829 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:52.829 03:12:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:52.829 03:12:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:52.829 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:52.829 03:12:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:52.829 03:12:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:52.829 03:12:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.829 03:12:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:52.829 03:12:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.829 03:12:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:52.829 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:52.829 03:12:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.829 03:12:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:52.829 03:12:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:52.829 03:12:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:52.829 03:12:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:52.829 03:12:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:52.829 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:52.829 03:12:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:52.829 03:12:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:52.829 03:12:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:52.829 03:12:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:52.829 03:12:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:52.829 03:12:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:52.829 03:12:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:52.829 03:12:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:52.829 03:12:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:52.829 03:12:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:52.829 03:12:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:24:52.829 03:12:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:52.829 03:12:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:52.829 03:12:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:52.829 03:12:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:52.829 03:12:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:52.829 03:12:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:52.829 03:12:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:52.829 03:12:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:52.829 03:12:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:52.829 03:12:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:52.829 03:12:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:52.829 03:12:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:52.829 03:12:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:52.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:52.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:24:52.829 00:24:52.829 --- 10.0.0.2 ping statistics --- 00:24:52.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.829 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:24:52.829 03:12:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:52.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:52.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:24:52.829 00:24:52.829 --- 10.0.0.1 ping statistics --- 00:24:52.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:52.829 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:24:52.829 03:12:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:52.829 03:12:47 -- nvmf/common.sh@410 -- # return 0 00:24:52.829 03:12:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:52.829 03:12:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:52.829 03:12:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:52.829 03:12:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:52.829 03:12:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:52.829 03:12:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:52.829 03:12:47 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:52.829 03:12:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:52.829 03:12:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:52.829 03:12:47 -- common/autotest_common.sh@10 -- # set +x 00:24:52.829 03:12:47 -- nvmf/common.sh@469 -- # nvmfpid=2077342 00:24:52.829 03:12:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:52.829 03:12:47 -- nvmf/common.sh@470 -- # waitforlisten 2077342 00:24:52.829 03:12:47 -- common/autotest_common.sh@819 -- # '[' -z 2077342 ']' 00:24:52.829 03:12:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:52.829 03:12:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:52.829 03:12:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:52.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:52.829 03:12:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:52.829 03:12:47 -- common/autotest_common.sh@10 -- # set +x 00:24:52.829 [2024-07-14 03:12:48.041264] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:52.829 [2024-07-14 03:12:48.041346] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:52.829 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.088 [2024-07-14 03:12:48.112423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:53.088 [2024-07-14 03:12:48.201625] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:53.088 [2024-07-14 03:12:48.201796] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:53.088 [2024-07-14 03:12:48.201817] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:53.088 [2024-07-14 03:12:48.201833] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:53.088 [2024-07-14 03:12:48.201952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.088 [2024-07-14 03:12:48.202038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:53.088 [2024-07-14 03:12:48.202109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.088 [2024-07-14 03:12:48.202107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:54.023 03:12:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:54.023 03:12:48 -- common/autotest_common.sh@852 -- # return 0 00:24:54.023 03:12:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:54.023 03:12:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:54.023 03:12:48 -- common/autotest_common.sh@10 -- # set +x 00:24:54.023 03:12:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.023 03:12:48 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:54.023 03:12:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:54.023 03:12:48 -- common/autotest_common.sh@10 -- # set +x 00:24:54.023 [2024-07-14 03:12:48.978338] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.023 03:12:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:54.023 03:12:48 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:54.023 03:12:48 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:54.023 03:12:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:54.023 03:12:48 -- common/autotest_common.sh@10 -- # set +x 00:24:54.023 03:12:48 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:54.023 03:12:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.023 03:12:48 -- target/shutdown.sh@28 -- # cat 00:24:54.023 03:12:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 
00:24:54.023 03:12:48 -- target/shutdown.sh@28 -- # cat 00:24:54.024 03:12:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.024 03:12:48 -- target/shutdown.sh@28 -- # cat 00:24:54.024 03:12:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.024 03:12:48 -- target/shutdown.sh@28 -- # cat 00:24:54.024 03:12:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.024 03:12:48 -- target/shutdown.sh@28 -- # cat 00:24:54.024 03:12:48 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.024 03:12:48 -- target/shutdown.sh@28 -- # cat 00:24:54.024 03:12:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.024 03:12:49 -- target/shutdown.sh@28 -- # cat 00:24:54.024 03:12:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.024 03:12:49 -- target/shutdown.sh@28 -- # cat 00:24:54.024 03:12:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.024 03:12:49 -- target/shutdown.sh@28 -- # cat 00:24:54.024 03:12:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:54.024 03:12:49 -- target/shutdown.sh@28 -- # cat 00:24:54.024 03:12:49 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:54.024 03:12:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:54.024 03:12:49 -- common/autotest_common.sh@10 -- # set +x 00:24:54.024 Malloc1 00:24:54.024 [2024-07-14 03:12:49.053348] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.024 Malloc2 00:24:54.024 Malloc3 00:24:54.024 Malloc4 00:24:54.024 Malloc5 00:24:54.024 Malloc6 00:24:54.282 Malloc7 00:24:54.282 Malloc8 00:24:54.282 Malloc9 00:24:54.282 Malloc10 00:24:54.282 03:12:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:54.282 03:12:49 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:54.282 03:12:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:54.282 03:12:49 -- 
common/autotest_common.sh@10 -- # set +x 00:24:54.282 03:12:49 -- target/shutdown.sh@124 -- # perfpid=2077535 00:24:54.282 03:12:49 -- target/shutdown.sh@125 -- # waitforlisten 2077535 /var/tmp/bdevperf.sock 00:24:54.282 03:12:49 -- common/autotest_common.sh@819 -- # '[' -z 2077535 ']' 00:24:54.282 03:12:49 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:54.282 03:12:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:54.282 03:12:49 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:54.282 03:12:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:54.282 03:12:49 -- nvmf/common.sh@520 -- # config=() 00:24:54.282 03:12:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:54.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:54.282 03:12:49 -- nvmf/common.sh@520 -- # local subsystem config 00:24:54.282 03:12:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:54.282 03:12:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:54.282 03:12:49 -- common/autotest_common.sh@10 -- # set +x 00:24:54.282 03:12:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:54.282 { 00:24:54.282 "params": { 00:24:54.282 "name": "Nvme$subsystem", 00:24:54.282 "trtype": "$TEST_TRANSPORT", 00:24:54.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.282 "adrfam": "ipv4", 00:24:54.282 "trsvcid": "$NVMF_PORT", 00:24:54.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.282 "hdgst": ${hdgst:-false}, 00:24:54.282 "ddgst": ${ddgst:-false} 00:24:54.282 }, 00:24:54.282 "method": "bdev_nvme_attach_controller" 00:24:54.282 } 00:24:54.282 EOF 00:24:54.282 )") 00:24:54.282 03:12:49 -- nvmf/common.sh@542 -- # cat 00:24:54.282 03:12:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:54.282 03:12:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:54.282 { 00:24:54.282 "params": { 00:24:54.282 "name": "Nvme$subsystem", 00:24:54.282 "trtype": "$TEST_TRANSPORT", 00:24:54.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.282 "adrfam": "ipv4", 00:24:54.282 "trsvcid": "$NVMF_PORT", 00:24:54.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.282 "hdgst": ${hdgst:-false}, 00:24:54.282 "ddgst": ${ddgst:-false} 00:24:54.282 }, 00:24:54.282 "method": "bdev_nvme_attach_controller" 00:24:54.282 } 00:24:54.282 EOF 00:24:54.282 )") 00:24:54.282 03:12:49 -- nvmf/common.sh@542 -- # cat 00:24:54.282 03:12:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:54.282 03:12:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:54.282 { 00:24:54.282 "params": { 00:24:54.282 "name": "Nvme$subsystem", 00:24:54.282 "trtype": "$TEST_TRANSPORT", 
00:24:54.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.282 "adrfam": "ipv4", 00:24:54.282 "trsvcid": "$NVMF_PORT", 00:24:54.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.282 "hdgst": ${hdgst:-false}, 00:24:54.282 "ddgst": ${ddgst:-false} 00:24:54.282 }, 00:24:54.282 "method": "bdev_nvme_attach_controller" 00:24:54.282 } 00:24:54.282 EOF 00:24:54.282 )") 00:24:54.282 03:12:49 -- nvmf/common.sh@542 -- # cat 00:24:54.282 03:12:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:54.282 03:12:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:54.282 { 00:24:54.282 "params": { 00:24:54.282 "name": "Nvme$subsystem", 00:24:54.282 "trtype": "$TEST_TRANSPORT", 00:24:54.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.282 "adrfam": "ipv4", 00:24:54.282 "trsvcid": "$NVMF_PORT", 00:24:54.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.282 "hdgst": ${hdgst:-false}, 00:24:54.282 "ddgst": ${ddgst:-false} 00:24:54.282 }, 00:24:54.282 "method": "bdev_nvme_attach_controller" 00:24:54.282 } 00:24:54.282 EOF 00:24:54.282 )") 00:24:54.282 03:12:49 -- nvmf/common.sh@542 -- # cat 00:24:54.282 03:12:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:54.282 03:12:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:54.282 { 00:24:54.282 "params": { 00:24:54.282 "name": "Nvme$subsystem", 00:24:54.282 "trtype": "$TEST_TRANSPORT", 00:24:54.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.282 "adrfam": "ipv4", 00:24:54.282 "trsvcid": "$NVMF_PORT", 00:24:54.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.282 "hdgst": ${hdgst:-false}, 00:24:54.282 "ddgst": ${ddgst:-false} 00:24:54.282 }, 00:24:54.282 "method": "bdev_nvme_attach_controller" 00:24:54.282 } 00:24:54.282 EOF 00:24:54.282 )") 00:24:54.282 03:12:49 -- nvmf/common.sh@542 -- 
# cat 00:24:54.282 03:12:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:54.282 03:12:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:54.282 { 00:24:54.282 "params": { 00:24:54.282 "name": "Nvme$subsystem", 00:24:54.282 "trtype": "$TEST_TRANSPORT", 00:24:54.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.282 "adrfam": "ipv4", 00:24:54.282 "trsvcid": "$NVMF_PORT", 00:24:54.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.282 "hdgst": ${hdgst:-false}, 00:24:54.282 "ddgst": ${ddgst:-false} 00:24:54.282 }, 00:24:54.282 "method": "bdev_nvme_attach_controller" 00:24:54.282 } 00:24:54.282 EOF 00:24:54.282 )") 00:24:54.282 03:12:49 -- nvmf/common.sh@542 -- # cat 00:24:54.282 03:12:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:54.282 03:12:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:54.282 { 00:24:54.282 "params": { 00:24:54.282 "name": "Nvme$subsystem", 00:24:54.282 "trtype": "$TEST_TRANSPORT", 00:24:54.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.282 "adrfam": "ipv4", 00:24:54.282 "trsvcid": "$NVMF_PORT", 00:24:54.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.282 "hdgst": ${hdgst:-false}, 00:24:54.282 "ddgst": ${ddgst:-false} 00:24:54.282 }, 00:24:54.282 "method": "bdev_nvme_attach_controller" 00:24:54.282 } 00:24:54.282 EOF 00:24:54.282 )") 00:24:54.540 03:12:49 -- nvmf/common.sh@542 -- # cat 00:24:54.540 03:12:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:54.540 03:12:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:54.540 { 00:24:54.540 "params": { 00:24:54.540 "name": "Nvme$subsystem", 00:24:54.540 "trtype": "$TEST_TRANSPORT", 00:24:54.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.540 "adrfam": "ipv4", 00:24:54.540 "trsvcid": "$NVMF_PORT", 00:24:54.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.540 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:24:54.540 "hdgst": ${hdgst:-false}, 00:24:54.540 "ddgst": ${ddgst:-false} 00:24:54.540 }, 00:24:54.540 "method": "bdev_nvme_attach_controller" 00:24:54.540 } 00:24:54.540 EOF 00:24:54.540 )") 00:24:54.540 03:12:49 -- nvmf/common.sh@542 -- # cat 00:24:54.540 03:12:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:54.540 03:12:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:54.540 { 00:24:54.540 "params": { 00:24:54.540 "name": "Nvme$subsystem", 00:24:54.540 "trtype": "$TEST_TRANSPORT", 00:24:54.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.540 "adrfam": "ipv4", 00:24:54.540 "trsvcid": "$NVMF_PORT", 00:24:54.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.540 "hdgst": ${hdgst:-false}, 00:24:54.540 "ddgst": ${ddgst:-false} 00:24:54.540 }, 00:24:54.540 "method": "bdev_nvme_attach_controller" 00:24:54.540 } 00:24:54.540 EOF 00:24:54.540 )") 00:24:54.540 03:12:49 -- nvmf/common.sh@542 -- # cat 00:24:54.540 03:12:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:54.540 03:12:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:54.540 { 00:24:54.540 "params": { 00:24:54.540 "name": "Nvme$subsystem", 00:24:54.540 "trtype": "$TEST_TRANSPORT", 00:24:54.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:54.540 "adrfam": "ipv4", 00:24:54.540 "trsvcid": "$NVMF_PORT", 00:24:54.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:54.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:54.540 "hdgst": ${hdgst:-false}, 00:24:54.540 "ddgst": ${ddgst:-false} 00:24:54.540 }, 00:24:54.540 "method": "bdev_nvme_attach_controller" 00:24:54.540 } 00:24:54.540 EOF 00:24:54.540 )") 00:24:54.540 03:12:49 -- nvmf/common.sh@542 -- # cat 00:24:54.540 03:12:49 -- nvmf/common.sh@544 -- # jq . 
00:24:54.540 03:12:49 -- nvmf/common.sh@545 -- # IFS=, 00:24:54.540 03:12:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:54.540 "params": { 00:24:54.540 "name": "Nvme1", 00:24:54.540 "trtype": "tcp", 00:24:54.540 "traddr": "10.0.0.2", 00:24:54.540 "adrfam": "ipv4", 00:24:54.540 "trsvcid": "4420", 00:24:54.540 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.540 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:54.540 "hdgst": false, 00:24:54.540 "ddgst": false 00:24:54.540 }, 00:24:54.540 "method": "bdev_nvme_attach_controller" 00:24:54.540 },{ 00:24:54.540 "params": { 00:24:54.540 "name": "Nvme2", 00:24:54.540 "trtype": "tcp", 00:24:54.540 "traddr": "10.0.0.2", 00:24:54.540 "adrfam": "ipv4", 00:24:54.540 "trsvcid": "4420", 00:24:54.540 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:54.540 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:54.540 "hdgst": false, 00:24:54.540 "ddgst": false 00:24:54.540 }, 00:24:54.540 "method": "bdev_nvme_attach_controller" 00:24:54.540 },{ 00:24:54.540 "params": { 00:24:54.540 "name": "Nvme3", 00:24:54.540 "trtype": "tcp", 00:24:54.540 "traddr": "10.0.0.2", 00:24:54.540 "adrfam": "ipv4", 00:24:54.540 "trsvcid": "4420", 00:24:54.540 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:54.540 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:54.540 "hdgst": false, 00:24:54.540 "ddgst": false 00:24:54.540 }, 00:24:54.540 "method": "bdev_nvme_attach_controller" 00:24:54.540 },{ 00:24:54.540 "params": { 00:24:54.540 "name": "Nvme4", 00:24:54.540 "trtype": "tcp", 00:24:54.540 "traddr": "10.0.0.2", 00:24:54.540 "adrfam": "ipv4", 00:24:54.540 "trsvcid": "4420", 00:24:54.540 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:54.540 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:54.540 "hdgst": false, 00:24:54.540 "ddgst": false 00:24:54.540 }, 00:24:54.540 "method": "bdev_nvme_attach_controller" 00:24:54.540 },{ 00:24:54.540 "params": { 00:24:54.540 "name": "Nvme5", 00:24:54.540 "trtype": "tcp", 00:24:54.540 "traddr": "10.0.0.2", 00:24:54.540 "adrfam": "ipv4", 
00:24:54.540 "trsvcid": "4420", 00:24:54.540 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:54.540 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:54.540 "hdgst": false, 00:24:54.540 "ddgst": false 00:24:54.540 }, 00:24:54.540 "method": "bdev_nvme_attach_controller" 00:24:54.540 },{ 00:24:54.540 "params": { 00:24:54.540 "name": "Nvme6", 00:24:54.540 "trtype": "tcp", 00:24:54.540 "traddr": "10.0.0.2", 00:24:54.540 "adrfam": "ipv4", 00:24:54.540 "trsvcid": "4420", 00:24:54.540 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:54.540 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:54.540 "hdgst": false, 00:24:54.540 "ddgst": false 00:24:54.540 }, 00:24:54.540 "method": "bdev_nvme_attach_controller" 00:24:54.540 },{ 00:24:54.540 "params": { 00:24:54.540 "name": "Nvme7", 00:24:54.541 "trtype": "tcp", 00:24:54.541 "traddr": "10.0.0.2", 00:24:54.541 "adrfam": "ipv4", 00:24:54.541 "trsvcid": "4420", 00:24:54.541 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:54.541 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:54.541 "hdgst": false, 00:24:54.541 "ddgst": false 00:24:54.541 }, 00:24:54.541 "method": "bdev_nvme_attach_controller" 00:24:54.541 },{ 00:24:54.541 "params": { 00:24:54.541 "name": "Nvme8", 00:24:54.541 "trtype": "tcp", 00:24:54.541 "traddr": "10.0.0.2", 00:24:54.541 "adrfam": "ipv4", 00:24:54.541 "trsvcid": "4420", 00:24:54.541 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:54.541 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:54.541 "hdgst": false, 00:24:54.541 "ddgst": false 00:24:54.541 }, 00:24:54.541 "method": "bdev_nvme_attach_controller" 00:24:54.541 },{ 00:24:54.541 "params": { 00:24:54.541 "name": "Nvme9", 00:24:54.541 "trtype": "tcp", 00:24:54.541 "traddr": "10.0.0.2", 00:24:54.541 "adrfam": "ipv4", 00:24:54.541 "trsvcid": "4420", 00:24:54.541 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:54.541 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:54.541 "hdgst": false, 00:24:54.541 "ddgst": false 00:24:54.541 }, 00:24:54.541 "method": "bdev_nvme_attach_controller" 
00:24:54.541 },{ 00:24:54.541 "params": { 00:24:54.541 "name": "Nvme10", 00:24:54.541 "trtype": "tcp", 00:24:54.541 "traddr": "10.0.0.2", 00:24:54.541 "adrfam": "ipv4", 00:24:54.541 "trsvcid": "4420", 00:24:54.541 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:54.541 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:54.541 "hdgst": false, 00:24:54.541 "ddgst": false 00:24:54.541 }, 00:24:54.541 "method": "bdev_nvme_attach_controller" 00:24:54.541 }' 00:24:54.541 [2024-07-14 03:12:49.553395] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:24:54.541 [2024-07-14 03:12:49.553471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2077535 ] 00:24:54.541 EAL: No free 2048 kB hugepages reported on node 1 00:24:54.541 [2024-07-14 03:12:49.619294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:54.541 [2024-07-14 03:12:49.704650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.440 Running I/O for 10 seconds... 
00:24:56.440 03:12:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:56.440 03:12:51 -- common/autotest_common.sh@852 -- # return 0 00:24:56.440 03:12:51 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:56.440 03:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.440 03:12:51 -- common/autotest_common.sh@10 -- # set +x 00:24:56.440 03:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.440 03:12:51 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:56.440 03:12:51 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:56.440 03:12:51 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:56.440 03:12:51 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:56.440 03:12:51 -- target/shutdown.sh@57 -- # local ret=1 00:24:56.440 03:12:51 -- target/shutdown.sh@58 -- # local i 00:24:56.440 03:12:51 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:56.440 03:12:51 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:56.440 03:12:51 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:56.440 03:12:51 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:56.440 03:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.440 03:12:51 -- common/autotest_common.sh@10 -- # set +x 00:24:56.440 03:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.440 03:12:51 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:56.440 03:12:51 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:56.440 03:12:51 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:56.440 03:12:51 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:56.440 03:12:51 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:56.440 03:12:51 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:56.440 
03:12:51 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:56.440 03:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.440 03:12:51 -- common/autotest_common.sh@10 -- # set +x 00:24:56.440 03:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.440 03:12:51 -- target/shutdown.sh@60 -- # read_io_count=87 00:24:56.440 03:12:51 -- target/shutdown.sh@63 -- # '[' 87 -ge 100 ']' 00:24:56.440 03:12:51 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:56.698 03:12:51 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:56.698 03:12:51 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:56.699 03:12:51 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:56.699 03:12:51 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:56.699 03:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.699 03:12:51 -- common/autotest_common.sh@10 -- # set +x 00:24:56.699 03:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.972 03:12:51 -- target/shutdown.sh@60 -- # read_io_count=211 00:24:56.972 03:12:51 -- target/shutdown.sh@63 -- # '[' 211 -ge 100 ']' 00:24:56.972 03:12:51 -- target/shutdown.sh@64 -- # ret=0 00:24:56.972 03:12:51 -- target/shutdown.sh@65 -- # break 00:24:56.972 03:12:51 -- target/shutdown.sh@69 -- # return 0 00:24:56.972 03:12:51 -- target/shutdown.sh@134 -- # killprocess 2077342 00:24:56.972 03:12:51 -- common/autotest_common.sh@926 -- # '[' -z 2077342 ']' 00:24:56.972 03:12:51 -- common/autotest_common.sh@930 -- # kill -0 2077342 00:24:56.972 03:12:51 -- common/autotest_common.sh@931 -- # uname 00:24:56.972 03:12:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:56.972 03:12:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2077342 00:24:56.972 03:12:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:56.972 03:12:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:56.972 
03:12:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2077342' 00:24:56.972 killing process with pid 2077342 03:12:52 -- common/autotest_common.sh@945 -- # kill 2077342 00:24:56.972 03:12:52 -- common/autotest_common.sh@950 -- # wait 2077342 00:24:56.972 [2024-07-14 03:12:52.008073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f230c0 is same with the state(5) to be set
00:24:56.973 [2024-07-14 03:12:52.014703] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e32400 is same with the state(5) to be set
00:24:56.974 [2024-07-14 03:12:52.017194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017729] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017741] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017753] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017791] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017855] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017888] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017903] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017919] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017956] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017981] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.017994] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.018006] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.018018] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f23570 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.019689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.019724] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.019739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.974 [2024-07-14 03:12:52.019752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.019765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.019778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.019791] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.019804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.019816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.019828] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.019841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.019854] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.019880] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.019895] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.019916] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.019929] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.019941] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.019953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.019966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.019978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.019990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020053] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020066] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020078] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020092] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020105] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020141] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020154] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020181] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020231] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020332] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020344] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020369] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020381] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020393] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020405] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020430] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020443] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020466] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020478] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020526] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.020537] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd51c0 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.021791] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.021824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.021839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.021858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.021884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.021898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.021915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.021899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-07-14 03:12:52.021928] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with id:0 cdw10:00000000 cdw11:00000000 00:24:56.975 the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.021946] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.021956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-14 03:12:52.021959] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.975 the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.021974] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.021976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.975 [2024-07-14 03:12:52.021986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.021991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.975 [2024-07-14 03:12:52.021999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.022006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.975 [2024-07-14 03:12:52.022013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.022020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.975 [2024-07-14 03:12:52.022026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.022036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.975 [2024-07-14 
03:12:52.022039] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.022050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.975 [2024-07-14 03:12:52.022052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.022064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de530 is same [2024-07-14 03:12:52.022066] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with with the state(5) to be set 00:24:56.975 the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.022080] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.022093] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.022105] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.975 [2024-07-14 03:12:52.022122] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022135] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with [2024-07-14 03:12:52.022131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsthe state(5) to be set 00:24:56.976 id:0 cdw10:00000000 cdw11:00000000 00:24:56.976 [2024-07-14 03:12:52.022162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 
03:12:52.022165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.976 [2024-07-14 03:12:52.022175] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.976 [2024-07-14 03:12:52.022187] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.976 [2024-07-14 03:12:52.022200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns[2024-07-14 03:12:52.022213] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with id:0 cdw10:00000000 cdw11:00000000 00:24:56.976 the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 c[2024-07-14 03:12:52.022227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.976 the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with [2024-07-14 03:12:52.022243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsthe state(5) to be set 00:24:56.976 id:0 cdw10:00000000 cdw11:00000000 
00:24:56.976 [2024-07-14 03:12:52.022257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with [2024-07-14 03:12:52.022258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:24:56.976 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.976 [2024-07-14 03:12:52.022272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b4c80 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022285] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-07-14 03:12:52.022324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with id:0 cdw10:00000000 cdw11:00000000 00:24:56.976 the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with [2024-07-14 03:12:52.022341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(5) to be set 00:24:56.976 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.976 [2024-07-14 03:12:52.022356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 
00:24:56.976 [2024-07-14 03:12:52.022359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.976 [2024-07-14 03:12:52.022371] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.976 [2024-07-14 03:12:52.022384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.976 [2024-07-14 03:12:52.022398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.976 [2024-07-14 03:12:52.022411] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.976 [2024-07-14 03:12:52.022424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.976 [2024-07-14 03:12:52.022437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same 
with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022447] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2281410 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022477] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022489] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976 [2024-07-14 03:12:52.022502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.976 [2024-07-14 03:12:52.022523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.976 [2024-07-14 03:12:52.022538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.976 [2024-07-14 03:12:52.022552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.976 [2024-07-14 03:12:52.022567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.976 [2024-07-14 03:12:52.022581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.976
[2024-07-14 03:12:52.022597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976
[2024-07-14 03:12:52.022603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.976
[2024-07-14 03:12:52.022618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976
[2024-07-14 03:12:52.022620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.976
[2024-07-14 03:12:52.022634] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976
[2024-07-14 03:12:52.022635] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2288a30 is same with the state(5) to be set 00:24:56.976
[2024-07-14 03:12:52.022649] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976
[2024-07-14 03:12:52.022663] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976
[2024-07-14 03:12:52.022676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976
[2024-07-14 03:12:52.022680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.976
[2024-07-14 03:12:52.022688] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976
[2024-07-14 03:12:52.022700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976
[2024-07-14 03:12:52.022700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.976
[2024-07-14 03:12:52.022715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976
[2024-07-14 03:12:52.022718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.976
[2024-07-14 03:12:52.022728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976
[2024-07-14 03:12:52.022733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.976
[2024-07-14 03:12:52.022741] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976
[2024-07-14 03:12:52.022748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.976
[2024-07-14 03:12:52.022754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976
[2024-07-14 03:12:52.022762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.976
[2024-07-14 03:12:52.022767] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.976
[2024-07-14 03:12:52.022778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.977
[2024-07-14 03:12:52.022779] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5650 is same with the state(5) to be set 00:24:56.977
[2024-07-14 03:12:52.022793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.977
[2024-07-14 03:12:52.022807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b2d80 is same with the state(5) to be set 00:24:56.977
[2024-07-14 03:12:52.023635] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977
[2024-07-14 03:12:52.023670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977
[2024-07-14 03:12:52.023686] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977
[2024-07-14 03:12:52.023699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977
[2024-07-14 03:12:52.023712] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977
[2024-07-14 03:12:52.023725] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977
[2024-07-14 03:12:52.023738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977
[2024-07-14 03:12:52.023751] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977
[2024-07-14 03:12:52.023763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.023776] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.023790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024404] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024417] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024455] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024494] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024576] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024590] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024603] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024616] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024628] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024641] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024691] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024704] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024729] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024742] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024767] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024779] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024829] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024842] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024952] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024965] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024978] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.024990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.025002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.025014] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.025026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.025038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5b00 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026391] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026405] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026443] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026456] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026561] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026625] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026638] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.977 [2024-07-14 03:12:52.026657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026683] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026696] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026709] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026761] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026774] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026788] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026801] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026814] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026826] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026873] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026961] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026974] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.026987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027012] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027069] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027082] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027095] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027144] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027157] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027169] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027182] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027232] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.027245] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bd5fb0 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.028808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.028839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.028854] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.028874] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.028888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.028901] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.028915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.028927] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.028939] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.028952] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.028970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.028984] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.028996] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029021] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029033] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029071] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029083] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029095] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029120] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029133] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029145] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029183] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029223] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029248] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029260] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029285] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029297] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029335] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029348] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029408] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.978 [2024-07-14 03:12:52.029420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.029432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.029445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.029457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.029470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.029482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.029494] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.029506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.029518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.029530] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.029542] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.029554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.029566] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.029578] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.029589] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.029601] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.029612] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.029624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31630 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.030812] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.030839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.030859] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031042] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031085] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031097] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031133] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031170] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031183] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031232] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031305] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031318] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031355] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031384] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031397] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031474] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031538] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031622] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031634] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031646] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031682] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031694] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031706] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031717] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031732] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031745] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031757] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031781] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031794] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.031807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e31f50 is same with the state(5) to be set 00:24:56.979 [2024-07-14 03:12:52.033932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.979 [2024-07-14 03:12:52.033964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.979 [2024-07-14 03:12:52.033981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.979 [2024-07-14 03:12:52.033996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.979 [2024-07-14 03:12:52.034012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.979 [2024-07-14 03:12:52.034025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.979 [2024-07-14 03:12:52.034039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.979 [2024-07-14 03:12:52.034053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.979 [2024-07-14 03:12:52.034066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2420d20 is same with the state(5) to 
be set 00:24:56.979 [2024-07-14 03:12:52.034116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.979 [2024-07-14 03:12:52.034145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.034161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.980 [2024-07-14 03:12:52.034174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.034189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.980 [2024-07-14 03:12:52.034202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.034216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.980 [2024-07-14 03:12:52.034230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.034243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2450d90 is same with the state(5) to be set 00:24:56.980 [2024-07-14 03:12:52.034278] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de530 (9): Bad file descriptor 00:24:56.980 [2024-07-14 03:12:52.034344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.980 [2024-07-14 03:12:52.034366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.034382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.980 [2024-07-14 03:12:52.034396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.034411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.980 [2024-07-14 03:12:52.034425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.034439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.980 [2024-07-14 03:12:52.034453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.034467] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24374d0 is same with the state(5) to be set 00:24:56.980 [2024-07-14 03:12:52.034494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b4c80 (9): Bad file descriptor 00:24:56.980 [2024-07-14 03:12:52.034523] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2281410 (9): Bad file descriptor 00:24:56.980 [2024-07-14 03:12:52.034573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.980 [2024-07-14 03:12:52.034595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.034611] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.980 [2024-07-14 03:12:52.034626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.034640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.980 [2024-07-14 03:12:52.034654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.034669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.980 [2024-07-14 03:12:52.034682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.034696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b110 is same with the state(5) to be set 00:24:56.980 [2024-07-14 03:12:52.034725] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2288a30 (9): Bad file descriptor 00:24:56.980 [2024-07-14 03:12:52.034754] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b2d80 (9): Bad file descriptor 00:24:56.980 [2024-07-14 03:12:52.034801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.980 [2024-07-14 03:12:52.034821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.034837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:24:56.980 [2024-07-14 03:12:52.034851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.034879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.980 [2024-07-14 03:12:52.034908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.034923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:56.980 [2024-07-14 03:12:52.034937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.034950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2450960 is same with the state(5) to be set 00:24:56.980 [2024-07-14 03:12:52.035018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:56.980 [2024-07-14 03:12:52.035500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035674] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.980 [2024-07-14 03:12:52.035754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.980 [2024-07-14 03:12:52.035773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.035790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.035806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.035823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.035838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.035855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.035877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.035896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.035911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.035928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.035944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.035960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.035975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.035992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 
03:12:52.036252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.036977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.981 [2024-07-14 03:12:52.036993] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.981 [2024-07-14 03:12:52.037008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.037028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.037044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.037061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.037081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.037098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.037113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.037136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.037151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.037167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2356aa0 is same with the state(5) to be set 00:24:56.982 [2024-07-14 03:12:52.037257] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: 
qpair 0x2356aa0 was disconnected and freed. reset controller. 00:24:56.982 [2024-07-14 03:12:52.037932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.037957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.037979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.037995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038320] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038494] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46208 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 
03:12:52.038878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.038977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.038992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.039009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.039028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.039045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.039060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.039077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.039092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.039115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.039136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.039153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.982 [2024-07-14 03:12:52.039168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.982 [2024-07-14 03:12:52.039185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.039200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.039216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.039231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.039247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 
nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.039262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.039278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.039293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.039310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.039324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.039340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.039355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.039371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.039386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.039402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.039416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:56.983 [2024-07-14 03:12:52.039436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.039452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.039468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.039482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.039498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.039514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.039529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.039544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.039560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.062237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.062322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.062338] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.062356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.062373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.062390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.062406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.062423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.062438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.062455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.062470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.062486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.062501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.062518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:58 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.062533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.062549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.062576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.062594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.062608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.062625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.062639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.062656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.062671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.062688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.062702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.062719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.062734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.062750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.062764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.062781] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235c490 is same with the state(5) to be set 00:24:56.983 [2024-07-14 03:12:52.062900] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x235c490 was disconnected and freed. reset controller. 00:24:56.983 [2024-07-14 03:12:52.063215] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2420d20 (9): Bad file descriptor 00:24:56.983 [2024-07-14 03:12:52.063255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2450d90 (9): Bad file descriptor 00:24:56.983 [2024-07-14 03:12:52.063291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24374d0 (9): Bad file descriptor 00:24:56.983 [2024-07-14 03:12:52.063336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243b110 (9): Bad file descriptor 00:24:56.983 [2024-07-14 03:12:52.063370] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:56.983 [2024-07-14 03:12:52.063400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2450960 (9): Bad file descriptor 00:24:56.983 [2024-07-14 03:12:52.066508] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:56.983 [2024-07-14 03:12:52.066658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.066683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.066712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.066730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.066748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.066772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.066789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.066805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.066821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.066837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.066854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.066877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.066896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.066911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.066929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.066944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.066961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.066978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.066995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.067010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.067026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.983 [2024-07-14 03:12:52.067042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.983 [2024-07-14 03:12:52.067058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.983 [2024-07-14 03:12:52.067073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067762] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067943] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.067975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.067996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.068011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.068028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.068043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.068060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.068076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.068092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.068107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.068124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:42880 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.068139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.068156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.068171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.068187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.068202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.068220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.068236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.068254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.068269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.068286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.068302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 
03:12:52.068319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.068334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.068350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.068364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.068381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.068400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.068417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.068432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.984 [2024-07-14 03:12:52.068449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.984 [2024-07-14 03:12:52.068463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.068480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.068495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.068512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.068528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.068545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.068560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.068577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.068593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.068609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.068624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.068641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.068657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.068674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:29 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.068690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.068707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.068722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.068739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.068754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.069992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:56.985 [2024-07-14 03:12:52.070108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070280] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:57 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 
03:12:52.070813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.985 [2024-07-14 03:12:52.070873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.985 [2024-07-14 03:12:52.070890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.070907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.070922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.070939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.070954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.070969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.070984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071000] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071531] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071712] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:45056 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.071971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.071987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.072002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.072018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.072037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.073266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.073291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 
03:12:52.073313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.073330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.986 [2024-07-14 03:12:52.073347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.986 [2024-07-14 03:12:52.073363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.073382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.073414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.073446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.073478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073495] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.073513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.073546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.073579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.073611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.073644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.073681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 
nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.073712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.073744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.073775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.073807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.073838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.987 [2024-07-14 03:12:52.073877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.073912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.073945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.073977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.073993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.074010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.074026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.074042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.074057] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.074074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.074093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.074111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.074125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.074143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.074158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.074174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.074190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.074207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.074222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.074239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.074255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.074271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.074286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.074303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.074317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.074334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.074349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.074365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.074381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.074399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.074415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.074432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.074447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.074463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.074478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.074498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.987 [2024-07-14 03:12:52.074515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.987 [2024-07-14 03:12:52.074531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.074546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.074562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.074579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.074596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 
03:12:52.074611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.074628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.074643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.074660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.074676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.074692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.074707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.074723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.074739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.074755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.074770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.074786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.074801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.074817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.074832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.074848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.074863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.074887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.074907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.074924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.074939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.074955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.074970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.074986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.075001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.075018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.075033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.075050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.075064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.075081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.075095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.075112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.075126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.075142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.075157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.075174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.075189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.075206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.075221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.075239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.075254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.075271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.075285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.075306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.075322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.075339] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.075355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.076763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.076790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.076812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.076828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.076846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.076861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.076887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:45824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.076904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.076920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.076936] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.076953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.076968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.076984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.077000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.077016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.077031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.077048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.077063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.077079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.077094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.077111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41984 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.077130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.077148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.077163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.077180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.077194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.077211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.077225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.077242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.077257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.077274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.077289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 
03:12:52.077306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.988 [2024-07-14 03:12:52.077320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.988 [2024-07-14 03:12:52.077337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:46208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 
nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:46464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:46592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:46720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:46848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:56.989 [2024-07-14 03:12:52.077841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:46976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:47104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.077980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.077996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078027] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:47872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:22 nsid:1 lba:48384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:49024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:49280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 
03:12:52.078562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:49920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.989 [2024-07-14 03:12:52.078654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.989 [2024-07-14 03:12:52.078670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.078685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.078701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.078720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.078736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:50560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.078751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.078768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:50688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.078784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.078800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.078814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.078934] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x235aeb0 was disconnected and freed. reset controller. 
00:24:56.990 [2024-07-14 03:12:52.079110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079301] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 
nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.990 [2024-07-14 03:12:52.079661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.990 [2024-07-14 03:12:52.079819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.990 [2024-07-14 03:12:52.079834] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.990 [2024-07-14 03:12:52.079850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.990 [2024-07-14 03:12:52.079872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.990 [2024-07-14 03:12:52.079891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.990 [2024-07-14 03:12:52.079906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.990 [2024-07-14 03:12:52.079923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.990 [2024-07-14 03:12:52.079938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.990 [2024-07-14 03:12:52.079955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.990 [2024-07-14 03:12:52.079969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.990 [2024-07-14 03:12:52.079986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.990 [2024-07-14 03:12:52.080000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.990 [2024-07-14 03:12:52.080016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.990 [2024-07-14 03:12:52.080031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.990 [2024-07-14 03:12:52.080048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.990 [2024-07-14 03:12:52.080062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.990 [2024-07-14 03:12:52.080079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.990 [2024-07-14 03:12:52.080093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.990 [2024-07-14 03:12:52.080110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.990 [2024-07-14 03:12:52.080125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.990 [2024-07-14 03:12:52.080141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.990 [2024-07-14 03:12:52.080156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.990 [2024-07-14 03:12:52.080172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.990 [2024-07-14 03:12:52.080187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.990 [2024-07-14 03:12:52.080208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.990 [2024-07-14 03:12:52.080223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.990 [2024-07-14 03:12:52.080240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.990 [2024-07-14 03:12:52.080254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.990 [2024-07-14 03:12:52.080271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.990 [2024-07-14 03:12:52.080287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.990 [2024-07-14 03:12:52.080304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.990 [2024-07-14 03:12:52.080319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.080978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.080993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.081014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.081029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.081046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.081062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.092829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.092931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.092953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.092969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.092986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.991 [2024-07-14 03:12:52.093002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.991 [2024-07-14 03:12:52.093019] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235f010 is same with the state(5) to be set 
00:24:56.991 [2024-07-14 03:12:52.094711] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:24:56.991 [2024-07-14 03:12:52.094753] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 
00:24:56.991 [2024-07-14 03:12:52.094861] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:56.991 [2024-07-14 03:12:52.094912] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:56.991 [2024-07-14 03:12:52.094950] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:56.991 [2024-07-14 03:12:52.094972] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:56.991 [2024-07-14 03:12:52.094994] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:56.991 [2024-07-14 03:12:52.096930] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:24:56.991 [2024-07-14 03:12:52.096986] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 
00:24:56.991 [2024-07-14 03:12:52.097014] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 
00:24:56.991 [2024-07-14 03:12:52.097043] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 
00:24:56.991 [2024-07-14 03:12:52.097063] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 
00:24:56.991 [2024-07-14 03:12:52.097310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:56.991 [2024-07-14 03:12:52.097480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:56.991 [2024-07-14 03:12:52.097507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2288a30 with addr=10.0.0.2, port=4420 
00:24:56.991 [2024-07-14 03:12:52.097526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2288a30 is same with the state(5) to be set 
00:24:56.991 [2024-07-14 03:12:52.097665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:56.991 [2024-07-14 03:12:52.097848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:56.991 [2024-07-14 03:12:52.097891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b4c80 with addr=10.0.0.2, port=4420 
00:24:56.991 [2024-07-14 03:12:52.097908] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b4c80 is same with the state(5) to be set 
00:24:56.991 [2024-07-14 03:12:52.098745] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:24:56.991 [2024-07-14 03:12:52.098819] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:24:56.992 [2024-07-14 03:12:52.099188] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 
00:24:56.992 [2024-07-14 03:12:52.099379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:56.992 [2024-07-14 03:12:52.099535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:56.992 [2024-07-14 03:12:52.099561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b2d80 with addr=10.0.0.2, port=4420 
00:24:56.992 [2024-07-14 03:12:52.099578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b2d80 is same with the state(5) to be set 
00:24:56.992 [2024-07-14 03:12:52.099725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:56.992 [2024-07-14 03:12:52.099880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:56.992 [2024-07-14 03:12:52.099907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2281410 with addr=10.0.0.2, port=4420 
00:24:56.992 [2024-07-14 03:12:52.099924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2281410 is same with the state(5) to be set 
00:24:56.992 [2024-07-14 03:12:52.100078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:56.992 [2024-07-14 03:12:52.100223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:56.992 [2024-07-14 03:12:52.100249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2450d90 with addr=10.0.0.2, port=4420 
00:24:56.992 [2024-07-14 03:12:52.100266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2450d90 is same with the state(5) to be set 
00:24:56.992 [2024-07-14 03:12:52.100409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:56.992 [2024-07-14 03:12:52.100551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:24:56.992 [2024-07-14 03:12:52.100577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22de530 with addr=10.0.0.2, port=4420 
00:24:56.992 [2024-07-14 03:12:52.100594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22de530 is same with the state(5) to be set 
00:24:56.992 [2024-07-14 03:12:52.100618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2288a30 (9): Bad file descriptor 
00:24:56.992 [2024-07-14 03:12:52.100640] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b4c80 (9): Bad file descriptor 
00:24:56.992 [2024-07-14 03:12:52.100771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.100796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.100823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.100841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.100875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.100892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.100909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.100931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.100949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.100964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.100980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.100996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.992 [2024-07-14 03:12:52.101913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.992 [2024-07-14 03:12:52.101930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.101945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.101961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.101978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.101994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.102732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:56.993 [2024-07-14 03:12:52.102747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.993 [2024-07-14 03:12:52.102764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.993 [2024-07-14 03:12:52.102780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.993 [2024-07-14 03:12:52.102796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.993 [2024-07-14 03:12:52.102811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.993 [2024-07-14 03:12:52.102827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.993 [2024-07-14 03:12:52.102843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.993 [2024-07-14 03:12:52.102864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.993 [2024-07-14 03:12:52.102886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.993 [2024-07-14 03:12:52.102902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23582f0 is same with the state(5) to be set 00:24:56.993 [2024-07-14 03:12:52.104172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.993 [2024-07-14 03:12:52.104196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:56.993 [2024-07-14 03:12:52.104217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.993 [2024-07-14 03:12:52.104239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.993 [2024-07-14 03:12:52.104257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.993 [2024-07-14 03:12:52.104272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.993 [2024-07-14 03:12:52.104289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.993 [2024-07-14 03:12:52.104303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.993 [2024-07-14 03:12:52.104320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.993 [2024-07-14 03:12:52.104334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.993 [2024-07-14 03:12:52.104351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.993 [2024-07-14 03:12:52.104366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.993 [2024-07-14 03:12:52.104383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.993 [2024-07-14 03:12:52.104398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.993 [2024-07-14 03:12:52.104414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.993 [2024-07-14 03:12:52.104430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.993 [2024-07-14 03:12:52.104446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.993 [2024-07-14 03:12:52.104462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.993 [2024-07-14 03:12:52.104480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.993 [2024-07-14 03:12:52.104495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.993 [2024-07-14 03:12:52.104512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.993 [2024-07-14 03:12:52.104527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.993 [2024-07-14 03:12:52.104543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.993 [2024-07-14 03:12:52.104558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.993 [2024-07-14 03:12:52.104574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.104589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.104606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.104621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.104642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.104658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.104674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.104689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.104705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.104721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.104737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.104751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:56.994 [2024-07-14 03:12:52.104768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.104782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.104798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.104813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.104830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.104857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.104881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.104898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.104914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.104930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.104946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.104961] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.104977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.104992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:45 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 
03:12:52.105508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.994 [2024-07-14 03:12:52.105968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.994 [2024-07-14 03:12:52.105984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.105999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.106015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.106030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.106046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44544 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.106061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.106077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.106092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.106108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.106122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.106139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.106155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.106171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.106186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.106202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.106217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 
03:12:52.106233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.106248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.106266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23598d0 is same with the state(5) to be set 00:24:56.995 [2024-07-14 03:12:52.107769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.107793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.107818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.107835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.107860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.107890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.107908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.107923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.107939] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.107954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.107970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.107986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 
[2024-07-14 03:12:52.108304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.995 [2024-07-14 03:12:52.108659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.995 [2024-07-14 03:12:52.108675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.108689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.108705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.108719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.108736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.108750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.108766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.108780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.108797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.108811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.108827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.108842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.108862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.108883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.108900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.108915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.108931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.108946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.108962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.108977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109034] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109214] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32896 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 
03:12:52.109575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:56.996 [2024-07-14 03:12:52.109857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:56.996 [2024-07-14 03:12:52.109880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x235da30 is same with the state(5) to be set 00:24:56.996 [2024-07-14 03:12:52.111676] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:56.996 [2024-07-14 03:12:52.111711] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:56.996 task offset: 29184 on job bdev=Nvme1n1 fails
00:24:56.996
00:24:56.996 Latency(us)
00:24:56.996 Device Information : runtime(s)  IOPS    MiB/s  Fail/s  TO/s  Average    min       max
00:24:56.996 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:56.996 Job: Nvme1n1 ended in about 0.85 seconds with error
00:24:56.996 Verification LBA range: start 0x0 length 0x400
00:24:56.996 Nvme1n1  : 0.85  243.98  15.25  75.07  0.00  199428.62  98643.82  187190.23
00:24:56.996 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:56.996 Job: Nvme2n1 ended in about 0.86 seconds with error
00:24:56.996 Verification LBA range: start 0x0 length 0x400
00:24:56.996 Nvme2n1  : 0.86  339.22  21.20  74.61  0.00  152394.14  76507.21  133596.35
00:24:56.996 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:56.996 Job: Nvme3n1 ended in about 0.86 seconds with error
00:24:56.996 Verification LBA range: start 0x0 length 0x400
00:24:56.996 Nvme3n1  : 0.86  337.94  21.12  74.32  0.00  151600.46  93206.76  121945.51
00:24:56.996 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:56.996 Job: Nvme4n1 ended in about 0.86 seconds with error
00:24:56.996 Verification LBA range: start 0x0 length 0x400
00:24:56.997 Nvme4n1  : 0.86  336.64  21.04  74.04  0.00  150849.99  76507.21  129712.73
00:24:56.997 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:56.997 Job: Nvme5n1 ended in about 0.89 seconds with error
00:24:56.997 Verification LBA range: start 0x0 length 0x400
00:24:56.997 Nvme5n1  : 0.89  233.20  14.58  71.75  0.00  201594.59  99032.18  198064.36
00:24:56.997 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:56.997 Job: Nvme6n1 ended in about 0.90 seconds with error
00:24:56.997 Verification LBA range: start 0x0 length 0x400
00:24:56.997 Nvme6n1  : 0.90  325.05  20.32  71.49  0.00  153675.73  93595.12  129712.73
00:24:56.997 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:56.997 Job: Nvme7n1 ended in about 0.88 seconds with error
00:24:56.997 Verification LBA range: start 0x0 length 0x400
00:24:56.997 Nvme7n1  : 0.88  377.60  23.60  72.36  0.00  134137.91  60972.75  113401.55
00:24:56.997 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:56.997 Job: Nvme8n1 ended in about 0.85 seconds with error
00:24:56.997 Verification LBA range: start 0x0 length 0x400
00:24:56.997 Nvme8n1  : 0.85  390.92  24.43  74.91  0.00  128060.25  31845.64  111848.11
00:24:56.997 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:56.997 Job: Nvme9n1 ended in about 0.90 seconds with error
00:24:56.997 Verification LBA range: start 0x0 length 0x400
00:24:56.997 Nvme9n1  : 0.90  279.24  17.45  71.20  0.00  169362.59  86604.61  136703.24
00:24:56.997 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:56.997 Job: Nvme10n1 ended in about 0.88 seconds with error
00:24:56.997 Verification LBA range: start 0x0 length 0x400
00:24:56.997 Nvme10n1 : 0.88  329.90  20.62  72.56  0.00  145903.03  83886.08  120392.06
00:24:56.997 ===================================================================================================================
00:24:56.997 Total    : 3193.69 199.61  732.30 0.00  155860.84  31845.64  198064.36
00:24:56.997 [2024-07-14 03:12:52.138707] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:56.997 [2024-07-14 03:12:52.138784] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:56.997 [2024-07-14 03:12:52.139137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.997 [2024-07-14 03:12:52.139299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.997 [2024-07-14 03:12:52.139327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x243b110 with addr=10.0.0.2, port=4420 00:24:56.997 [2024-07-14 03:12:52.139347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b110 is same with the state(5) to be set 00:24:56.997 [2024-07-14 03:12:52.139376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b2d80 (9): Bad file descriptor 00:24:56.997 [2024-07-14 03:12:52.139399]
nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2281410 (9): Bad file descriptor 00:24:56.997 [2024-07-14 03:12:52.139418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2450d90 (9): Bad file descriptor 00:24:56.997 [2024-07-14 03:12:52.139437] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22de530 (9): Bad file descriptor 00:24:56.997 [2024-07-14 03:12:52.139455] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.997 [2024-07-14 03:12:52.139468] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.997 [2024-07-14 03:12:52.139485] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.997 [2024-07-14 03:12:52.139513] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:56.997 [2024-07-14 03:12:52.139528] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:56.997 [2024-07-14 03:12:52.139542] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:56.997 [2024-07-14 03:12:52.139604] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:56.997 [2024-07-14 03:12:52.139631] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:56.997 [2024-07-14 03:12:52.139651] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:56.997 [2024-07-14 03:12:52.139671] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:56.997 [2024-07-14 03:12:52.139689] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:56.997 [2024-07-14 03:12:52.139710] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:56.997 [2024-07-14 03:12:52.139730] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x243b110 (9): Bad file descriptor 00:24:56.997 [2024-07-14 03:12:52.139916] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.997 [2024-07-14 03:12:52.139941] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.997 [2024-07-14 03:12:52.140158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.997 [2024-07-14 03:12:52.140322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.997 [2024-07-14 03:12:52.140348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24374d0 with addr=10.0.0.2, port=4420 00:24:56.997 [2024-07-14 03:12:52.140373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24374d0 is same with the state(5) to be set 00:24:56.997 [2024-07-14 03:12:52.140523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.997 [2024-07-14 03:12:52.140669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.997 [2024-07-14 03:12:52.140694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2450960 with addr=10.0.0.2, port=4420 00:24:56.997 [2024-07-14 03:12:52.140711] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2450960 is same with the state(5) to be set 00:24:56.997 [2024-07-14 03:12:52.140858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.997 [2024-07-14 
03:12:52.141017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.997 [2024-07-14 03:12:52.141044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2420d20 with addr=10.0.0.2, port=4420 00:24:56.997 [2024-07-14 03:12:52.141061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2420d20 is same with the state(5) to be set 00:24:56.997 [2024-07-14 03:12:52.141078] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:56.997 [2024-07-14 03:12:52.141092] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:56.997 [2024-07-14 03:12:52.141106] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:56.997 [2024-07-14 03:12:52.141126] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:56.997 [2024-07-14 03:12:52.141142] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:56.997 [2024-07-14 03:12:52.141156] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:56.997 [2024-07-14 03:12:52.141174] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:56.997 [2024-07-14 03:12:52.141188] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:56.997 [2024-07-14 03:12:52.141201] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:24:56.997 [2024-07-14 03:12:52.141219] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:56.997 [2024-07-14 03:12:52.141234] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:56.997 [2024-07-14 03:12:52.141247] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:56.997 [2024-07-14 03:12:52.141294] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:56.997 [2024-07-14 03:12:52.141319] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:56.997 [2024-07-14 03:12:52.141338] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:56.997 [2024-07-14 03:12:52.141356] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:56.997 [2024-07-14 03:12:52.141373] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:56.997 [2024-07-14 03:12:52.142205] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.997 [2024-07-14 03:12:52.142231] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.997 [2024-07-14 03:12:52.142244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.997 [2024-07-14 03:12:52.142257] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.997 [2024-07-14 03:12:52.142283] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24374d0 (9): Bad file descriptor 00:24:56.997 [2024-07-14 03:12:52.142310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2450960 (9): Bad file descriptor 00:24:56.997 [2024-07-14 03:12:52.142329] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2420d20 (9): Bad file descriptor 00:24:56.997 [2024-07-14 03:12:52.142346] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:56.997 [2024-07-14 03:12:52.142359] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:56.997 [2024-07-14 03:12:52.142373] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:56.997 [2024-07-14 03:12:52.142442] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:56.997 [2024-07-14 03:12:52.142468] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:56.997 [2024-07-14 03:12:52.142485] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.997 [2024-07-14 03:12:52.142515] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:56.997 [2024-07-14 03:12:52.142532] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:56.997 [2024-07-14 03:12:52.142546] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:24:56.997 [2024-07-14 03:12:52.142563] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:56.997 [2024-07-14 03:12:52.142577] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:56.997 [2024-07-14 03:12:52.142591] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:56.997 [2024-07-14 03:12:52.142608] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:56.997 [2024-07-14 03:12:52.142622] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:56.997 [2024-07-14 03:12:52.142635] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:56.997 [2024-07-14 03:12:52.142698] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.997 [2024-07-14 03:12:52.142718] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.997 [2024-07-14 03:12:52.142731] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:56.997 [2024-07-14 03:12:52.142906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.997 [2024-07-14 03:12:52.143065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.998 [2024-07-14 03:12:52.143090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22b4c80 with addr=10.0.0.2, port=4420 00:24:56.998 [2024-07-14 03:12:52.143106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22b4c80 is same with the state(5) to be set 00:24:56.998 [2024-07-14 03:12:52.143247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.998 [2024-07-14 03:12:52.143385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.998 [2024-07-14 03:12:52.143410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2288a30 with addr=10.0.0.2, port=4420 00:24:56.998 [2024-07-14 03:12:52.143426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2288a30 is same with the state(5) to be set 00:24:56.998 [2024-07-14 03:12:52.143473] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b4c80 (9): Bad file descriptor 00:24:56.998 [2024-07-14 03:12:52.143497] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2288a30 (9): Bad file descriptor 00:24:56.998 [2024-07-14 03:12:52.143537] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:56.998 [2024-07-14 03:12:52.143560] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:56.998 [2024-07-14 03:12:52.143576] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:24:56.998 [2024-07-14 03:12:52.143593] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:56.998 [2024-07-14 03:12:52.143607] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:56.998 [2024-07-14 03:12:52.143621] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:56.998 [2024-07-14 03:12:52.143657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:56.998 [2024-07-14 03:12:52.143683] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:57.575 03:12:52 -- target/shutdown.sh@135 -- # nvmfpid= 00:24:57.575 03:12:52 -- target/shutdown.sh@138 -- # sleep 1 00:24:58.513 03:12:53 -- target/shutdown.sh@141 -- # kill -9 2077535 00:24:58.513 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (2077535) - No such process 00:24:58.513 03:12:53 -- target/shutdown.sh@141 -- # true 00:24:58.513 03:12:53 -- target/shutdown.sh@143 -- # stoptarget 00:24:58.513 03:12:53 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:58.513 03:12:53 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:58.513 03:12:53 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:58.513 03:12:53 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:58.513 03:12:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:58.513 03:12:53 -- nvmf/common.sh@116 -- # sync 00:24:58.513 03:12:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:58.513 03:12:53 -- nvmf/common.sh@119 -- # set +e 00:24:58.513 03:12:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:58.513 03:12:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:58.513 rmmod nvme_tcp 
00:24:58.513 rmmod nvme_fabrics 00:24:58.513 rmmod nvme_keyring 00:24:58.513 03:12:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:58.513 03:12:53 -- nvmf/common.sh@123 -- # set -e 00:24:58.513 03:12:53 -- nvmf/common.sh@124 -- # return 0 00:24:58.513 03:12:53 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:24:58.513 03:12:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:58.513 03:12:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:58.513 03:12:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:58.513 03:12:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:58.513 03:12:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:58.513 03:12:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.513 03:12:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.513 03:12:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.412 03:12:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:00.412 00:25:00.412 real 0m7.812s 00:25:00.412 user 0m19.564s 00:25:00.412 sys 0m1.555s 00:25:00.412 03:12:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:00.412 03:12:55 -- common/autotest_common.sh@10 -- # set +x 00:25:00.412 ************************************ 00:25:00.412 END TEST nvmf_shutdown_tc3 00:25:00.412 ************************************ 00:25:00.412 03:12:55 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:25:00.412 00:25:00.412 real 0m28.179s 00:25:00.412 user 1m20.928s 00:25:00.412 sys 0m6.445s 00:25:00.412 03:12:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:00.412 03:12:55 -- common/autotest_common.sh@10 -- # set +x 00:25:00.412 ************************************ 00:25:00.412 END TEST nvmf_shutdown 00:25:00.412 ************************************ 00:25:00.669 03:12:55 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:25:00.669 03:12:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:00.669 
03:12:55 -- common/autotest_common.sh@10 -- # set +x 00:25:00.669 03:12:55 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:25:00.669 03:12:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:00.669 03:12:55 -- common/autotest_common.sh@10 -- # set +x 00:25:00.669 03:12:55 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:25:00.669 03:12:55 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:00.669 03:12:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:00.669 03:12:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:00.669 03:12:55 -- common/autotest_common.sh@10 -- # set +x 00:25:00.669 ************************************ 00:25:00.669 START TEST nvmf_multicontroller 00:25:00.669 ************************************ 00:25:00.669 03:12:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:00.669 * Looking for test storage... 
00:25:00.669 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:00.669 03:12:55 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:00.669 03:12:55 -- nvmf/common.sh@7 -- # uname -s 00:25:00.669 03:12:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:00.669 03:12:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:00.669 03:12:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:00.669 03:12:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:00.669 03:12:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:00.669 03:12:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:00.669 03:12:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:00.669 03:12:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:00.669 03:12:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:00.669 03:12:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:00.669 03:12:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:00.669 03:12:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:00.669 03:12:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:00.669 03:12:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:00.669 03:12:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:00.669 03:12:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:00.669 03:12:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:00.669 03:12:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:00.669 03:12:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:00.670 03:12:55 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.670 03:12:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.670 03:12:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.670 03:12:55 -- paths/export.sh@5 -- # export PATH 00:25:00.670 03:12:55 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:00.670 03:12:55 -- nvmf/common.sh@46 -- # : 0 00:25:00.670 03:12:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:00.670 03:12:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:00.670 03:12:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:00.670 03:12:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:00.670 03:12:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:00.670 03:12:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:00.670 03:12:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:00.670 03:12:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:00.670 03:12:55 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:00.670 03:12:55 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:00.670 03:12:55 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:00.670 03:12:55 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:00.670 03:12:55 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:00.670 03:12:55 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:00.670 03:12:55 -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:00.670 03:12:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:00.670 03:12:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:00.670 03:12:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:00.670 03:12:55 -- nvmf/common.sh@398 -- # local -g 
is_hw=no 00:25:00.670 03:12:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:00.670 03:12:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:00.670 03:12:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:00.670 03:12:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:00.670 03:12:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:00.670 03:12:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:00.670 03:12:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:00.670 03:12:55 -- common/autotest_common.sh@10 -- # set +x 00:25:02.569 03:12:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:02.569 03:12:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:02.569 03:12:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:02.569 03:12:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:02.569 03:12:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:02.569 03:12:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:02.570 03:12:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:02.570 03:12:57 -- nvmf/common.sh@294 -- # net_devs=() 00:25:02.570 03:12:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:02.570 03:12:57 -- nvmf/common.sh@295 -- # e810=() 00:25:02.570 03:12:57 -- nvmf/common.sh@295 -- # local -ga e810 00:25:02.570 03:12:57 -- nvmf/common.sh@296 -- # x722=() 00:25:02.570 03:12:57 -- nvmf/common.sh@296 -- # local -ga x722 00:25:02.570 03:12:57 -- nvmf/common.sh@297 -- # mlx=() 00:25:02.570 03:12:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:02.570 03:12:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:02.570 03:12:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:02.570 03:12:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:02.570 03:12:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:02.570 03:12:57 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:02.570 03:12:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:02.570 03:12:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:02.570 03:12:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:02.570 03:12:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:02.570 03:12:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:02.570 03:12:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:02.570 03:12:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:02.570 03:12:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:02.570 03:12:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:02.570 03:12:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:02.570 03:12:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:02.570 03:12:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:02.570 03:12:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:02.570 03:12:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:02.570 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:02.570 03:12:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:02.570 03:12:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:02.570 03:12:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.570 03:12:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.570 03:12:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:02.570 03:12:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:02.570 03:12:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:02.570 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:02.570 03:12:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:02.570 03:12:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:02.570 03:12:57 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.570 03:12:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.570 03:12:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:02.570 03:12:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:02.570 03:12:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:02.570 03:12:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:02.570 03:12:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:02.570 03:12:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.570 03:12:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:02.570 03:12:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.570 03:12:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:02.570 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:02.570 03:12:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.570 03:12:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:02.570 03:12:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.570 03:12:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:02.570 03:12:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.570 03:12:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:02.570 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:02.570 03:12:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.570 03:12:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:02.570 03:12:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:02.570 03:12:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:02.570 03:12:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:02.570 03:12:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:02.570 03:12:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:02.570 03:12:57 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:02.570 03:12:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:02.570 03:12:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:02.570 03:12:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:02.570 03:12:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:02.570 03:12:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:02.570 03:12:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:02.570 03:12:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:02.570 03:12:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:02.570 03:12:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:02.570 03:12:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:02.570 03:12:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:02.570 03:12:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:02.570 03:12:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:02.570 03:12:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:02.570 03:12:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:02.570 03:12:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:02.570 03:12:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:02.570 03:12:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:02.570 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:02.570 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:25:02.570 00:25:02.570 --- 10.0.0.2 ping statistics --- 00:25:02.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.570 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:25:02.570 03:12:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:02.570 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:02.570 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:25:02.570 00:25:02.570 --- 10.0.0.1 ping statistics --- 00:25:02.570 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:02.570 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:25:02.570 03:12:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:02.570 03:12:57 -- nvmf/common.sh@410 -- # return 0 00:25:02.570 03:12:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:02.570 03:12:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:02.570 03:12:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:02.570 03:12:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:02.570 03:12:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:02.570 03:12:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:02.570 03:12:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:02.828 03:12:57 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:02.828 03:12:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:02.828 03:12:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:02.828 03:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:02.828 03:12:57 -- nvmf/common.sh@469 -- # nvmfpid=2080077 00:25:02.828 03:12:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:02.828 03:12:57 -- nvmf/common.sh@470 -- # waitforlisten 2080077 00:25:02.828 03:12:57 -- 
common/autotest_common.sh@819 -- # '[' -z 2080077 ']' 00:25:02.828 03:12:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.828 03:12:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:02.828 03:12:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.828 03:12:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:02.828 03:12:57 -- common/autotest_common.sh@10 -- # set +x 00:25:02.828 [2024-07-14 03:12:57.890384] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:02.828 [2024-07-14 03:12:57.890453] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.828 EAL: No free 2048 kB hugepages reported on node 1 00:25:02.828 [2024-07-14 03:12:57.955650] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:02.828 [2024-07-14 03:12:58.039542] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:02.828 [2024-07-14 03:12:58.039691] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.828 [2024-07-14 03:12:58.039710] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.828 [2024-07-14 03:12:58.039723] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:02.828 [2024-07-14 03:12:58.039808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.828 [2024-07-14 03:12:58.039887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:02.828 [2024-07-14 03:12:58.039892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.761 03:12:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:03.761 03:12:58 -- common/autotest_common.sh@852 -- # return 0 00:25:03.761 03:12:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:03.761 03:12:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:03.761 03:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:03.761 03:12:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.761 03:12:58 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:03.761 03:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.761 03:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:03.761 [2024-07-14 03:12:58.884245] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.761 03:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.762 03:12:58 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:03.762 03:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.762 03:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:03.762 Malloc0 00:25:03.762 03:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.762 03:12:58 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:03.762 03:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.762 03:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:03.762 03:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.762 03:12:58 -- host/multicontroller.sh@31 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:03.762 03:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.762 03:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:03.762 03:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.762 03:12:58 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:03.762 03:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.762 03:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:03.762 [2024-07-14 03:12:58.942081] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.762 03:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.762 03:12:58 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:03.762 03:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.762 03:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:03.762 [2024-07-14 03:12:58.949967] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:03.762 03:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.762 03:12:58 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:03.762 03:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.762 03:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:03.762 Malloc1 00:25:03.762 03:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.762 03:12:58 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:03.762 03:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.762 03:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:03.762 03:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.762 03:12:58 -- 
host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:03.762 03:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.762 03:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:03.762 03:12:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.762 03:12:58 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:03.762 03:12:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.762 03:12:58 -- common/autotest_common.sh@10 -- # set +x 00:25:03.762 03:12:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.762 03:12:59 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:03.762 03:12:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:03.762 03:12:59 -- common/autotest_common.sh@10 -- # set +x 00:25:03.762 03:12:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:03.762 03:12:59 -- host/multicontroller.sh@44 -- # bdevperf_pid=2080232 00:25:03.762 03:12:59 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:03.762 03:12:59 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:03.762 03:12:59 -- host/multicontroller.sh@47 -- # waitforlisten 2080232 /var/tmp/bdevperf.sock 00:25:03.762 03:12:59 -- common/autotest_common.sh@819 -- # '[' -z 2080232 ']' 00:25:03.762 03:12:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:03.762 03:12:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:03.762 03:12:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:25:03.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:03.762 03:12:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:03.762 03:12:59 -- common/autotest_common.sh@10 -- # set +x 00:25:05.134 03:12:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:05.134 03:12:59 -- common/autotest_common.sh@852 -- # return 0 00:25:05.134 03:12:59 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:05.134 03:12:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.134 03:12:59 -- common/autotest_common.sh@10 -- # set +x 00:25:05.134 NVMe0n1 00:25:05.134 03:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.134 03:13:00 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:05.134 03:13:00 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:05.134 03:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.134 03:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:05.134 03:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.134 1 00:25:05.134 03:13:00 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:05.134 03:13:00 -- common/autotest_common.sh@640 -- # local es=0 00:25:05.134 03:13:00 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:05.134 03:13:00 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:05.134 03:13:00 -- common/autotest_common.sh@632 -- # 
case "$(type -t "$arg")" in 00:25:05.134 03:13:00 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:05.134 03:13:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:05.134 03:13:00 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:05.134 03:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.134 03:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:05.134 request: 00:25:05.134 { 00:25:05.134 "name": "NVMe0", 00:25:05.134 "trtype": "tcp", 00:25:05.134 "traddr": "10.0.0.2", 00:25:05.134 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:05.134 "hostaddr": "10.0.0.2", 00:25:05.134 "hostsvcid": "60000", 00:25:05.134 "adrfam": "ipv4", 00:25:05.134 "trsvcid": "4420", 00:25:05.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:05.134 "method": "bdev_nvme_attach_controller", 00:25:05.134 "req_id": 1 00:25:05.134 } 00:25:05.134 Got JSON-RPC error response 00:25:05.134 response: 00:25:05.134 { 00:25:05.134 "code": -114, 00:25:05.134 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:05.134 } 00:25:05.134 03:13:00 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:05.134 03:13:00 -- common/autotest_common.sh@643 -- # es=1 00:25:05.134 03:13:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:05.134 03:13:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:05.134 03:13:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:05.134 03:13:00 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:05.134 03:13:00 -- common/autotest_common.sh@640 -- # local es=0 00:25:05.134 03:13:00 -- common/autotest_common.sh@642 -- # valid_exec_arg 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:05.134 03:13:00 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:05.134 03:13:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:05.134 03:13:00 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:05.134 03:13:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:05.134 03:13:00 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:05.134 03:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.134 03:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:05.134 request: 00:25:05.134 { 00:25:05.134 "name": "NVMe0", 00:25:05.134 "trtype": "tcp", 00:25:05.134 "traddr": "10.0.0.2", 00:25:05.134 "hostaddr": "10.0.0.2", 00:25:05.134 "hostsvcid": "60000", 00:25:05.134 "adrfam": "ipv4", 00:25:05.134 "trsvcid": "4420", 00:25:05.134 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:05.134 "method": "bdev_nvme_attach_controller", 00:25:05.134 "req_id": 1 00:25:05.134 } 00:25:05.134 Got JSON-RPC error response 00:25:05.134 response: 00:25:05.134 { 00:25:05.134 "code": -114, 00:25:05.134 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:05.134 } 00:25:05.134 03:13:00 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:05.134 03:13:00 -- common/autotest_common.sh@643 -- # es=1 00:25:05.134 03:13:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:05.134 03:13:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:05.134 03:13:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:05.134 03:13:00 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 
-f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:05.134 03:13:00 -- common/autotest_common.sh@640 -- # local es=0 00:25:05.134 03:13:00 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:05.134 03:13:00 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:05.134 03:13:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:05.134 03:13:00 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:05.134 03:13:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:05.134 03:13:00 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:05.134 03:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.134 03:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:05.134 request: 00:25:05.134 { 00:25:05.134 "name": "NVMe0", 00:25:05.134 "trtype": "tcp", 00:25:05.134 "traddr": "10.0.0.2", 00:25:05.134 "hostaddr": "10.0.0.2", 00:25:05.134 "hostsvcid": "60000", 00:25:05.134 "adrfam": "ipv4", 00:25:05.134 "trsvcid": "4420", 00:25:05.134 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:05.134 "multipath": "disable", 00:25:05.134 "method": "bdev_nvme_attach_controller", 00:25:05.134 "req_id": 1 00:25:05.134 } 00:25:05.134 Got JSON-RPC error response 00:25:05.134 response: 00:25:05.134 { 00:25:05.134 "code": -114, 00:25:05.134 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:25:05.134 } 00:25:05.134 03:13:00 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:05.134 03:13:00 -- common/autotest_common.sh@643 -- # es=1 00:25:05.134 03:13:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:05.134 03:13:00 -- 
common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:05.134 03:13:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:05.134 03:13:00 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:05.134 03:13:00 -- common/autotest_common.sh@640 -- # local es=0 00:25:05.134 03:13:00 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:05.134 03:13:00 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:05.134 03:13:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:05.134 03:13:00 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:05.135 03:13:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:05.135 03:13:00 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:05.135 03:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.135 03:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:05.135 request: 00:25:05.135 { 00:25:05.135 "name": "NVMe0", 00:25:05.135 "trtype": "tcp", 00:25:05.135 "traddr": "10.0.0.2", 00:25:05.135 "hostaddr": "10.0.0.2", 00:25:05.135 "hostsvcid": "60000", 00:25:05.135 "adrfam": "ipv4", 00:25:05.135 "trsvcid": "4420", 00:25:05.135 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:05.135 "multipath": "failover", 00:25:05.135 "method": "bdev_nvme_attach_controller", 00:25:05.135 "req_id": 1 00:25:05.135 } 00:25:05.135 Got JSON-RPC error response 00:25:05.135 response: 00:25:05.135 { 00:25:05.135 "code": -114, 00:25:05.135 "message": "A controller named NVMe0 already exists with the 
specified network path\n" 00:25:05.135 } 00:25:05.135 03:13:00 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:05.135 03:13:00 -- common/autotest_common.sh@643 -- # es=1 00:25:05.135 03:13:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:05.135 03:13:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:05.135 03:13:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:05.135 03:13:00 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:05.135 03:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.135 03:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:05.392 00:25:05.392 03:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.392 03:13:00 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:05.392 03:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.392 03:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:05.392 03:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.392 03:13:00 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:05.392 03:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.392 03:13:00 -- common/autotest_common.sh@10 -- # set +x 00:25:05.392 00:25:05.392 03:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.392 03:13:00 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:05.392 03:13:00 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:05.392 03:13:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:05.392 03:13:00 -- common/autotest_common.sh@10 -- # set +x 
00:25:05.392 03:13:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:05.392 03:13:00 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:05.392 03:13:00 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:06.766 0 00:25:06.766 03:13:01 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:06.766 03:13:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.766 03:13:01 -- common/autotest_common.sh@10 -- # set +x 00:25:06.766 03:13:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.766 03:13:01 -- host/multicontroller.sh@100 -- # killprocess 2080232 00:25:06.766 03:13:01 -- common/autotest_common.sh@926 -- # '[' -z 2080232 ']' 00:25:06.766 03:13:01 -- common/autotest_common.sh@930 -- # kill -0 2080232 00:25:06.766 03:13:01 -- common/autotest_common.sh@931 -- # uname 00:25:06.766 03:13:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:06.766 03:13:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2080232 00:25:06.766 03:13:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:06.766 03:13:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:06.766 03:13:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2080232' 00:25:06.766 killing process with pid 2080232 00:25:06.766 03:13:01 -- common/autotest_common.sh@945 -- # kill 2080232 00:25:06.766 03:13:01 -- common/autotest_common.sh@950 -- # wait 2080232 00:25:06.766 03:13:01 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:06.766 03:13:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.766 03:13:01 -- common/autotest_common.sh@10 -- # set +x 00:25:06.766 03:13:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.766 03:13:01 -- host/multicontroller.sh@103 
-- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:06.766 03:13:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.766 03:13:01 -- common/autotest_common.sh@10 -- # set +x 00:25:06.766 03:13:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.766 03:13:01 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:25:06.766 03:13:01 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:06.766 03:13:01 -- common/autotest_common.sh@1597 -- # read -r file 00:25:06.766 03:13:01 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:06.766 03:13:01 -- common/autotest_common.sh@1596 -- # sort -u 00:25:06.766 03:13:01 -- common/autotest_common.sh@1598 -- # cat 00:25:06.766 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:06.766 [2024-07-14 03:12:59.050560] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:06.766 [2024-07-14 03:12:59.050638] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080232 ] 00:25:06.766 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.766 [2024-07-14 03:12:59.109449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.766 [2024-07-14 03:12:59.194295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.766 [2024-07-14 03:13:00.513067] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 2b10e067-03a2-4ebd-bf86-942f27a8be8f already exists 00:25:06.766 [2024-07-14 03:13:00.513111] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:2b10e067-03a2-4ebd-bf86-942f27a8be8f alias for bdev NVMe1n1 00:25:06.766 [2024-07-14 03:13:00.513128] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:06.766 Running I/O for 1 seconds... 00:25:06.766 00:25:06.766 Latency(us) 00:25:06.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.766 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:06.766 NVMe0n1 : 1.00 19716.97 77.02 0.00 0.00 6475.64 2233.08 9563.40 00:25:06.766 =================================================================================================================== 00:25:06.766 Total : 19716.97 77.02 0.00 0.00 6475.64 2233.08 9563.40 00:25:06.766 Received shutdown signal, test time was about 1.000000 seconds 00:25:06.766 00:25:06.766 Latency(us) 00:25:06.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.766 =================================================================================================================== 00:25:06.766 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:06.766 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:06.766 03:13:01 -- 
common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:06.766 03:13:01 -- common/autotest_common.sh@1597 -- # read -r file 00:25:06.766 03:13:01 -- host/multicontroller.sh@108 -- # nvmftestfini 00:25:06.766 03:13:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:06.766 03:13:01 -- nvmf/common.sh@116 -- # sync 00:25:06.766 03:13:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:06.766 03:13:01 -- nvmf/common.sh@119 -- # set +e 00:25:06.766 03:13:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:06.766 03:13:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:06.766 rmmod nvme_tcp 00:25:06.766 rmmod nvme_fabrics 00:25:06.766 rmmod nvme_keyring 00:25:06.766 03:13:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:06.766 03:13:02 -- nvmf/common.sh@123 -- # set -e 00:25:06.766 03:13:02 -- nvmf/common.sh@124 -- # return 0 00:25:06.766 03:13:02 -- nvmf/common.sh@477 -- # '[' -n 2080077 ']' 00:25:06.766 03:13:02 -- nvmf/common.sh@478 -- # killprocess 2080077 00:25:06.766 03:13:02 -- common/autotest_common.sh@926 -- # '[' -z 2080077 ']' 00:25:06.766 03:13:02 -- common/autotest_common.sh@930 -- # kill -0 2080077 00:25:06.766 03:13:02 -- common/autotest_common.sh@931 -- # uname 00:25:07.025 03:13:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:07.025 03:13:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2080077 00:25:07.025 03:13:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:07.025 03:13:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:07.025 03:13:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2080077' 00:25:07.025 killing process with pid 2080077 00:25:07.025 03:13:02 -- common/autotest_common.sh@945 -- # kill 2080077 00:25:07.025 03:13:02 -- common/autotest_common.sh@950 -- # wait 2080077 00:25:07.285 03:13:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:07.285 
03:13:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:07.285 03:13:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:07.285 03:13:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:07.285 03:13:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:07.285 03:13:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.285 03:13:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:07.285 03:13:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.184 03:13:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:09.184 00:25:09.184 real 0m8.674s 00:25:09.184 user 0m16.251s 00:25:09.185 sys 0m2.559s 00:25:09.185 03:13:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:09.185 03:13:04 -- common/autotest_common.sh@10 -- # set +x 00:25:09.185 ************************************ 00:25:09.185 END TEST nvmf_multicontroller 00:25:09.185 ************************************ 00:25:09.185 03:13:04 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:09.185 03:13:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:09.185 03:13:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:09.185 03:13:04 -- common/autotest_common.sh@10 -- # set +x 00:25:09.185 ************************************ 00:25:09.185 START TEST nvmf_aer 00:25:09.185 ************************************ 00:25:09.185 03:13:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:09.444 * Looking for test storage... 
00:25:09.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:09.444 03:13:04 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:09.444 03:13:04 -- nvmf/common.sh@7 -- # uname -s 00:25:09.444 03:13:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:09.444 03:13:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:09.444 03:13:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:09.444 03:13:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:09.444 03:13:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:09.444 03:13:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:09.444 03:13:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:09.444 03:13:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:09.444 03:13:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:09.444 03:13:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:09.444 03:13:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:09.444 03:13:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:09.444 03:13:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:09.444 03:13:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:09.444 03:13:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:09.444 03:13:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:09.444 03:13:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:09.444 03:13:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:09.444 03:13:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:09.444 03:13:04 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.444 03:13:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.444 03:13:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.444 03:13:04 -- paths/export.sh@5 -- # export PATH 00:25:09.444 03:13:04 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:09.444 03:13:04 -- nvmf/common.sh@46 -- # : 0 00:25:09.444 03:13:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:09.444 03:13:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:09.444 03:13:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:09.444 03:13:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:09.444 03:13:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:09.444 03:13:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:09.444 03:13:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:09.444 03:13:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:09.444 03:13:04 -- host/aer.sh@11 -- # nvmftestinit 00:25:09.444 03:13:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:09.444 03:13:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:09.444 03:13:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:09.444 03:13:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:09.444 03:13:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:09.444 03:13:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:09.444 03:13:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:09.444 03:13:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:09.444 03:13:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:09.444 03:13:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:09.444 03:13:04 -- 
nvmf/common.sh@284 -- # xtrace_disable 00:25:09.444 03:13:04 -- common/autotest_common.sh@10 -- # set +x 00:25:11.345 03:13:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:11.345 03:13:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:11.345 03:13:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:11.345 03:13:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:11.345 03:13:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:11.345 03:13:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:11.345 03:13:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:11.345 03:13:06 -- nvmf/common.sh@294 -- # net_devs=() 00:25:11.345 03:13:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:11.345 03:13:06 -- nvmf/common.sh@295 -- # e810=() 00:25:11.345 03:13:06 -- nvmf/common.sh@295 -- # local -ga e810 00:25:11.345 03:13:06 -- nvmf/common.sh@296 -- # x722=() 00:25:11.345 03:13:06 -- nvmf/common.sh@296 -- # local -ga x722 00:25:11.345 03:13:06 -- nvmf/common.sh@297 -- # mlx=() 00:25:11.345 03:13:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:11.345 03:13:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.345 03:13:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.345 03:13:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.345 03:13:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.345 03:13:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.345 03:13:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.345 03:13:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.345 03:13:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.345 03:13:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.345 03:13:06 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.345 03:13:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.345 03:13:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:11.345 03:13:06 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:11.345 03:13:06 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:11.345 03:13:06 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:11.345 03:13:06 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:11.345 03:13:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:11.345 03:13:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:11.345 03:13:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:11.345 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:11.345 03:13:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:11.345 03:13:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:11.345 03:13:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.345 03:13:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.345 03:13:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:11.345 03:13:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:11.345 03:13:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:11.345 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:11.345 03:13:06 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:11.345 03:13:06 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:11.345 03:13:06 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.345 03:13:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.345 03:13:06 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:11.345 03:13:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:11.345 03:13:06 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:11.345 03:13:06 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:11.345 03:13:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:25:11.345 03:13:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.345 03:13:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:11.345 03:13:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.345 03:13:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:11.345 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:11.345 03:13:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.345 03:13:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:11.345 03:13:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.345 03:13:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:11.346 03:13:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.346 03:13:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:11.346 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:11.346 03:13:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.346 03:13:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:11.346 03:13:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:11.346 03:13:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:11.346 03:13:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:11.346 03:13:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:11.346 03:13:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.346 03:13:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.346 03:13:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.346 03:13:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:11.346 03:13:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.346 03:13:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.346 03:13:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:11.346 03:13:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 
00:25:11.346 03:13:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.346 03:13:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:11.346 03:13:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:11.346 03:13:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.346 03:13:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.346 03:13:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.346 03:13:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.346 03:13:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:11.346 03:13:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.346 03:13:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.346 03:13:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.346 03:13:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:11.346 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:11.346 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:25:11.346 00:25:11.346 --- 10.0.0.2 ping statistics --- 00:25:11.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.346 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:25:11.346 03:13:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.346 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:11.346 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.255 ms 00:25:11.346 00:25:11.346 --- 10.0.0.1 ping statistics --- 00:25:11.346 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.346 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:25:11.346 03:13:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.346 03:13:06 -- nvmf/common.sh@410 -- # return 0 00:25:11.346 03:13:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:11.346 03:13:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.346 03:13:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:11.346 03:13:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:11.346 03:13:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.346 03:13:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:11.346 03:13:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:11.346 03:13:06 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:11.346 03:13:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:11.346 03:13:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:11.346 03:13:06 -- common/autotest_common.sh@10 -- # set +x 00:25:11.346 03:13:06 -- nvmf/common.sh@469 -- # nvmfpid=2082516 00:25:11.346 03:13:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:11.346 03:13:06 -- nvmf/common.sh@470 -- # waitforlisten 2082516 00:25:11.346 03:13:06 -- common/autotest_common.sh@819 -- # '[' -z 2082516 ']' 00:25:11.346 03:13:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.346 03:13:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:11.346 03:13:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:11.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.346 03:13:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:11.346 03:13:06 -- common/autotest_common.sh@10 -- # set +x 00:25:11.346 [2024-07-14 03:13:06.588081] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:11.346 [2024-07-14 03:13:06.588186] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:11.604 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.604 [2024-07-14 03:13:06.657416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:11.604 [2024-07-14 03:13:06.741797] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:11.604 [2024-07-14 03:13:06.741961] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:11.604 [2024-07-14 03:13:06.741979] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:11.604 [2024-07-14 03:13:06.741991] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:11.604 [2024-07-14 03:13:06.742054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.604 [2024-07-14 03:13:06.742115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:11.604 [2024-07-14 03:13:06.742180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:11.604 [2024-07-14 03:13:06.742183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:12.536 03:13:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:12.536 03:13:07 -- common/autotest_common.sh@852 -- # return 0 00:25:12.536 03:13:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:12.536 03:13:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:12.536 03:13:07 -- common/autotest_common.sh@10 -- # set +x 00:25:12.536 03:13:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:12.536 03:13:07 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:12.536 03:13:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.536 03:13:07 -- common/autotest_common.sh@10 -- # set +x 00:25:12.536 [2024-07-14 03:13:07.533410] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:12.536 03:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.536 03:13:07 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:12.536 03:13:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.536 03:13:07 -- common/autotest_common.sh@10 -- # set +x 00:25:12.536 Malloc0 00:25:12.536 03:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.536 03:13:07 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:12.536 03:13:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.536 03:13:07 -- common/autotest_common.sh@10 -- # set +x 00:25:12.536 03:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:25:12.536 03:13:07 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:12.536 03:13:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.536 03:13:07 -- common/autotest_common.sh@10 -- # set +x 00:25:12.536 03:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.536 03:13:07 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:12.536 03:13:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.536 03:13:07 -- common/autotest_common.sh@10 -- # set +x 00:25:12.536 [2024-07-14 03:13:07.587258] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:12.536 03:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.536 03:13:07 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:12.536 03:13:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.536 03:13:07 -- common/autotest_common.sh@10 -- # set +x 00:25:12.536 [2024-07-14 03:13:07.594973] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:12.536 [ 00:25:12.536 { 00:25:12.536 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:12.536 "subtype": "Discovery", 00:25:12.536 "listen_addresses": [], 00:25:12.536 "allow_any_host": true, 00:25:12.536 "hosts": [] 00:25:12.536 }, 00:25:12.536 { 00:25:12.536 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.536 "subtype": "NVMe", 00:25:12.536 "listen_addresses": [ 00:25:12.536 { 00:25:12.536 "transport": "TCP", 00:25:12.536 "trtype": "TCP", 00:25:12.536 "adrfam": "IPv4", 00:25:12.536 "traddr": "10.0.0.2", 00:25:12.536 "trsvcid": "4420" 00:25:12.536 } 00:25:12.536 ], 00:25:12.536 "allow_any_host": true, 00:25:12.536 "hosts": [], 00:25:12.536 "serial_number": "SPDK00000000000001", 00:25:12.536 "model_number": "SPDK bdev Controller", 
00:25:12.536 "max_namespaces": 2, 00:25:12.536 "min_cntlid": 1, 00:25:12.536 "max_cntlid": 65519, 00:25:12.536 "namespaces": [ 00:25:12.536 { 00:25:12.536 "nsid": 1, 00:25:12.536 "bdev_name": "Malloc0", 00:25:12.536 "name": "Malloc0", 00:25:12.536 "nguid": "8C6F1FBC5CE7401984DAB2459D7A93D1", 00:25:12.536 "uuid": "8c6f1fbc-5ce7-4019-84da-b2459d7a93d1" 00:25:12.536 } 00:25:12.536 ] 00:25:12.536 } 00:25:12.536 ] 00:25:12.536 03:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.536 03:13:07 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:12.536 03:13:07 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:12.536 03:13:07 -- host/aer.sh@33 -- # aerpid=2082633 00:25:12.536 03:13:07 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:12.536 03:13:07 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:12.536 03:13:07 -- common/autotest_common.sh@1244 -- # local i=0 00:25:12.536 03:13:07 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:12.536 03:13:07 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:25:12.536 03:13:07 -- common/autotest_common.sh@1247 -- # i=1 00:25:12.536 03:13:07 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:25:12.536 EAL: No free 2048 kB hugepages reported on node 1 00:25:12.536 03:13:07 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:12.536 03:13:07 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:25:12.536 03:13:07 -- common/autotest_common.sh@1247 -- # i=2 00:25:12.536 03:13:07 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:25:12.795 03:13:07 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:12.795 03:13:07 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:12.795 03:13:07 -- common/autotest_common.sh@1255 -- # return 0 00:25:12.795 03:13:07 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:12.795 03:13:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.795 03:13:07 -- common/autotest_common.sh@10 -- # set +x 00:25:12.795 Malloc1 00:25:12.795 03:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.795 03:13:07 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:12.795 03:13:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.795 03:13:07 -- common/autotest_common.sh@10 -- # set +x 00:25:12.795 03:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.795 03:13:07 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:12.795 03:13:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.795 03:13:07 -- common/autotest_common.sh@10 -- # set +x 00:25:12.795 Asynchronous Event Request test 00:25:12.795 Attaching to 10.0.0.2 00:25:12.795 Attached to 10.0.0.2 00:25:12.795 Registering asynchronous event callbacks... 00:25:12.795 Starting namespace attribute notice tests for all controllers... 00:25:12.795 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:12.795 aer_cb - Changed Namespace 00:25:12.795 Cleaning up... 
00:25:12.795 [ 00:25:12.795 { 00:25:12.795 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:12.795 "subtype": "Discovery", 00:25:12.795 "listen_addresses": [], 00:25:12.795 "allow_any_host": true, 00:25:12.795 "hosts": [] 00:25:12.795 }, 00:25:12.795 { 00:25:12.795 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:12.795 "subtype": "NVMe", 00:25:12.795 "listen_addresses": [ 00:25:12.795 { 00:25:12.795 "transport": "TCP", 00:25:12.795 "trtype": "TCP", 00:25:12.795 "adrfam": "IPv4", 00:25:12.795 "traddr": "10.0.0.2", 00:25:12.795 "trsvcid": "4420" 00:25:12.795 } 00:25:12.795 ], 00:25:12.795 "allow_any_host": true, 00:25:12.795 "hosts": [], 00:25:12.795 "serial_number": "SPDK00000000000001", 00:25:12.795 "model_number": "SPDK bdev Controller", 00:25:12.795 "max_namespaces": 2, 00:25:12.795 "min_cntlid": 1, 00:25:12.795 "max_cntlid": 65519, 00:25:12.795 "namespaces": [ 00:25:12.795 { 00:25:12.795 "nsid": 1, 00:25:12.795 "bdev_name": "Malloc0", 00:25:12.795 "name": "Malloc0", 00:25:12.795 "nguid": "8C6F1FBC5CE7401984DAB2459D7A93D1", 00:25:12.795 "uuid": "8c6f1fbc-5ce7-4019-84da-b2459d7a93d1" 00:25:12.795 }, 00:25:12.795 { 00:25:12.795 "nsid": 2, 00:25:12.795 "bdev_name": "Malloc1", 00:25:12.795 "name": "Malloc1", 00:25:12.795 "nguid": "D4053444779B4855BFF934E53CC854C8", 00:25:12.795 "uuid": "d4053444-779b-4855-bff9-34e53cc854c8" 00:25:12.795 } 00:25:12.795 ] 00:25:12.795 } 00:25:12.795 ] 00:25:12.795 03:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.795 03:13:07 -- host/aer.sh@43 -- # wait 2082633 00:25:12.795 03:13:07 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:12.795 03:13:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.795 03:13:07 -- common/autotest_common.sh@10 -- # set +x 00:25:12.795 03:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.795 03:13:07 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:12.795 03:13:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.795 
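The AER test above drives the target entirely through RPCs; stripped of the xtrace noise, the sequence is short. A minimal dry-run sketch: `RPC` is set to `echo rpc.py` here so the sequence prints instead of executing; against a live `nvmf_tgt`, point it at the real `scripts/rpc.py` (socket path and target state are assumptions, not shown in this log).

```shell
# Dry-run of the RPC sequence exercised by host/aer.sh above.
# Swap RPC for the real scripts/rpc.py to run against a live nvmf_tgt.
RPC="echo rpc.py"

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 --name Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_get_subsystems
```

Adding `Malloc1` as a second namespace (`nvmf_subsystem_add_ns ... Malloc1 -n 2`) is what fires the namespace-attribute AER the test waits on.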
03:13:07 -- common/autotest_common.sh@10 -- # set +x 00:25:12.795 03:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.795 03:13:07 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:12.795 03:13:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:12.795 03:13:07 -- common/autotest_common.sh@10 -- # set +x 00:25:12.795 03:13:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:12.795 03:13:07 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:12.795 03:13:07 -- host/aer.sh@51 -- # nvmftestfini 00:25:12.795 03:13:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:12.795 03:13:07 -- nvmf/common.sh@116 -- # sync 00:25:12.795 03:13:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:12.795 03:13:07 -- nvmf/common.sh@119 -- # set +e 00:25:12.795 03:13:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:12.795 03:13:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:12.795 rmmod nvme_tcp 00:25:12.795 rmmod nvme_fabrics 00:25:12.795 rmmod nvme_keyring 00:25:12.795 03:13:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:12.795 03:13:07 -- nvmf/common.sh@123 -- # set -e 00:25:12.795 03:13:07 -- nvmf/common.sh@124 -- # return 0 00:25:12.795 03:13:07 -- nvmf/common.sh@477 -- # '[' -n 2082516 ']' 00:25:12.795 03:13:07 -- nvmf/common.sh@478 -- # killprocess 2082516 00:25:12.795 03:13:07 -- common/autotest_common.sh@926 -- # '[' -z 2082516 ']' 00:25:12.795 03:13:07 -- common/autotest_common.sh@930 -- # kill -0 2082516 00:25:12.795 03:13:07 -- common/autotest_common.sh@931 -- # uname 00:25:12.795 03:13:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:12.795 03:13:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2082516 00:25:12.795 03:13:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:12.795 03:13:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:12.795 03:13:08 -- common/autotest_common.sh@944 -- # echo 
'killing process with pid 2082516' 00:25:12.795 killing process with pid 2082516 00:25:12.795 03:13:08 -- common/autotest_common.sh@945 -- # kill 2082516 00:25:12.795 [2024-07-14 03:13:08.025945] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:12.795 03:13:08 -- common/autotest_common.sh@950 -- # wait 2082516 00:25:13.055 03:13:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:13.055 03:13:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:13.055 03:13:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:13.055 03:13:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:13.055 03:13:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:13.055 03:13:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.055 03:13:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.055 03:13:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.592 03:13:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:15.592 00:25:15.592 real 0m5.877s 00:25:15.592 user 0m6.707s 00:25:15.592 sys 0m1.917s 00:25:15.592 03:13:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:15.592 03:13:10 -- common/autotest_common.sh@10 -- # set +x 00:25:15.592 ************************************ 00:25:15.592 END TEST nvmf_aer 00:25:15.592 ************************************ 00:25:15.592 03:13:10 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:15.592 03:13:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:15.592 03:13:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:15.592 03:13:10 -- common/autotest_common.sh@10 -- # set +x 00:25:15.592 ************************************ 00:25:15.592 START TEST nvmf_async_init 00:25:15.592 
************************************ 00:25:15.592 03:13:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:15.592 * Looking for test storage... 00:25:15.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:15.592 03:13:10 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:15.592 03:13:10 -- nvmf/common.sh@7 -- # uname -s 00:25:15.592 03:13:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.592 03:13:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.592 03:13:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.592 03:13:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.592 03:13:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.592 03:13:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.592 03:13:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.592 03:13:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.592 03:13:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.592 03:13:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.592 03:13:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:15.592 03:13:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:15.592 03:13:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.592 03:13:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.592 03:13:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.592 03:13:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:15.592 03:13:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.592 03:13:10 -- scripts/common.sh@441 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.592 03:13:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.592 03:13:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.592 03:13:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.592 03:13:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.592 03:13:10 -- paths/export.sh@5 -- # export PATH 00:25:15.592 03:13:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.592 03:13:10 -- nvmf/common.sh@46 -- # : 0 00:25:15.592 03:13:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:15.592 03:13:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:15.592 03:13:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:15.592 03:13:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.592 03:13:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.592 03:13:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:15.592 03:13:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:15.592 03:13:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:15.592 03:13:10 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:15.592 03:13:10 -- host/async_init.sh@14 -- # null_block_size=512 00:25:15.592 03:13:10 -- host/async_init.sh@15 -- # null_bdev=null0 00:25:15.592 03:13:10 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:15.592 03:13:10 -- host/async_init.sh@20 -- # uuidgen 00:25:15.592 03:13:10 -- host/async_init.sh@20 -- # tr -d - 00:25:15.592 03:13:10 -- host/async_init.sh@20 -- # nguid=066951b50a3d4d428e0a1f6be2c9657d 00:25:15.592 03:13:10 -- host/async_init.sh@22 -- # nvmftestinit 00:25:15.592 03:13:10 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:15.592 03:13:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.592 03:13:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:15.592 03:13:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 
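host/async_init.sh derives its namespace NGUID by stripping the dashes from a fresh UUID (`uuidgen | tr -d -` above), and the subsystem listings earlier in the log show the inverse mapping (nguid `8C6F1FBC...` reported back as uuid `8c6f1fbc-...`). A minimal sketch of that round trip, using the values from this log:

```shell
# The NGUID is just the 32-hex-digit, dash-free form of the namespace
# UUID; case and dashes are the only difference, as the
# nvmf_get_subsystems output above shows.
nguid="8C6F1FBC5CE7401984DAB2459D7A93D1"
uuid=$(echo "$nguid" | tr 'A-F' 'a-f' \
    | sed -E 's/^(.{8})(.{4})(.{4})(.{4})(.{12})$/\1-\2-\3-\4-\5/')
echo "$uuid"
```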
00:25:15.592 03:13:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:15.592 03:13:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.592 03:13:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:15.592 03:13:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.592 03:13:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:15.592 03:13:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:15.592 03:13:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:15.592 03:13:10 -- common/autotest_common.sh@10 -- # set +x 00:25:17.543 03:13:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:17.543 03:13:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:17.543 03:13:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:17.543 03:13:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:17.543 03:13:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:17.543 03:13:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:17.543 03:13:12 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:17.543 03:13:12 -- nvmf/common.sh@294 -- # net_devs=() 00:25:17.543 03:13:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:17.543 03:13:12 -- nvmf/common.sh@295 -- # e810=() 00:25:17.543 03:13:12 -- nvmf/common.sh@295 -- # local -ga e810 00:25:17.543 03:13:12 -- nvmf/common.sh@296 -- # x722=() 00:25:17.543 03:13:12 -- nvmf/common.sh@296 -- # local -ga x722 00:25:17.543 03:13:12 -- nvmf/common.sh@297 -- # mlx=() 00:25:17.543 03:13:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:17.543 03:13:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:17.543 03:13:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:17.543 03:13:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:17.543 03:13:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:17.543 03:13:12 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:17.543 03:13:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:17.543 03:13:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:17.543 03:13:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:17.543 03:13:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:17.543 03:13:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:17.543 03:13:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:17.543 03:13:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:17.543 03:13:12 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:17.543 03:13:12 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:17.543 03:13:12 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:17.544 03:13:12 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:17.544 03:13:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:17.544 03:13:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:17.544 03:13:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:17.544 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:17.544 03:13:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:17.544 03:13:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:17.544 03:13:12 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.544 03:13:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.544 03:13:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:17.544 03:13:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:17.544 03:13:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:17.544 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:17.544 03:13:12 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:17.544 03:13:12 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:17.544 03:13:12 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:17.544 03:13:12 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:17.544 03:13:12 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:17.544 03:13:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:17.544 03:13:12 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:17.544 03:13:12 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:17.544 03:13:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:17.544 03:13:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.544 03:13:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:17.544 03:13:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.544 03:13:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:17.544 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:17.544 03:13:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.544 03:13:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:17.544 03:13:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:17.544 03:13:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:17.544 03:13:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:17.544 03:13:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:17.544 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:17.544 03:13:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:17.544 03:13:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:17.544 03:13:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:17.544 03:13:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:17.544 03:13:12 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:17.544 03:13:12 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:17.544 03:13:12 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:17.544 03:13:12 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:17.544 03:13:12 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:17.544 03:13:12 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:17.544 03:13:12 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:17.544 03:13:12 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:17.544 03:13:12 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:17.544 03:13:12 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:17.544 03:13:12 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:17.544 03:13:12 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:17.544 03:13:12 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:17.544 03:13:12 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:17.544 03:13:12 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:17.544 03:13:12 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:17.544 03:13:12 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:17.544 03:13:12 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:17.544 03:13:12 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:17.544 03:13:12 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:17.544 03:13:12 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:17.544 03:13:12 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:17.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:17.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:25:17.544 00:25:17.544 --- 10.0.0.2 ping statistics --- 00:25:17.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.544 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:25:17.544 03:13:12 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:17.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:17.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:25:17.544 00:25:17.544 --- 10.0.0.1 ping statistics --- 00:25:17.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:17.544 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:25:17.544 03:13:12 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:17.544 03:13:12 -- nvmf/common.sh@410 -- # return 0 00:25:17.544 03:13:12 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:17.544 03:13:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:17.544 03:13:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:17.544 03:13:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:17.544 03:13:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:17.544 03:13:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:17.544 03:13:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:17.544 03:13:12 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:17.544 03:13:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:17.544 03:13:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:17.544 03:13:12 -- common/autotest_common.sh@10 -- # set +x 00:25:17.544 03:13:12 -- nvmf/common.sh@469 -- # nvmfpid=2084703 00:25:17.544 03:13:12 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:17.544 03:13:12 -- nvmf/common.sh@470 -- # waitforlisten 2084703 00:25:17.544 03:13:12 -- common/autotest_common.sh@819 
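The nvmf_tcp_init steps above move one of the two `cvl_0_*` ports into a network namespace so initiator and target traffic actually cross the wire. A dry-run sketch of the same plumbing: `IP` echoes instead of executing, since the real commands need root and the physical `cvl_0_*` NICs from this test bed.

```shell
# Dry-run of the netns split performed by nvmf_tcp_init above.
# Replace IP with plain 'ip' (as root, with the cvl_0_* ports present).
IP="echo ip"

$IP netns add cvl_0_0_ns_spdk
$IP link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the netns
$IP addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
$IP netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
$IP link set cvl_0_1 up
$IP netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
```

The two pings in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside `cvl_0_0_ns_spdk`) verify this setup in both directions before the target starts.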
-- # '[' -z 2084703 ']' 00:25:17.544 03:13:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:17.544 03:13:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:17.544 03:13:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:17.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:17.544 03:13:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:17.544 03:13:12 -- common/autotest_common.sh@10 -- # set +x 00:25:17.544 [2024-07-14 03:13:12.594921] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:17.544 [2024-07-14 03:13:12.595010] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:17.544 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.544 [2024-07-14 03:13:12.663798] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.544 [2024-07-14 03:13:12.753032] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:17.544 [2024-07-14 03:13:12.753217] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:17.544 [2024-07-14 03:13:12.753237] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:17.544 [2024-07-14 03:13:12.753254] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:17.544 [2024-07-14 03:13:12.753286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.487 03:13:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:18.487 03:13:13 -- common/autotest_common.sh@852 -- # return 0 00:25:18.487 03:13:13 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:18.487 03:13:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:18.487 03:13:13 -- common/autotest_common.sh@10 -- # set +x 00:25:18.487 03:13:13 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:18.487 03:13:13 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:18.487 03:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.487 03:13:13 -- common/autotest_common.sh@10 -- # set +x 00:25:18.487 [2024-07-14 03:13:13.543445] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:18.487 03:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.487 03:13:13 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:18.487 03:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.487 03:13:13 -- common/autotest_common.sh@10 -- # set +x 00:25:18.487 null0 00:25:18.487 03:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.487 03:13:13 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:18.487 03:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.487 03:13:13 -- common/autotest_common.sh@10 -- # set +x 00:25:18.487 03:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.487 03:13:13 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:18.487 03:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.487 03:13:13 -- common/autotest_common.sh@10 -- # set +x 00:25:18.487 03:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.488 03:13:13 -- 
host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 066951b50a3d4d428e0a1f6be2c9657d 00:25:18.488 03:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.488 03:13:13 -- common/autotest_common.sh@10 -- # set +x 00:25:18.488 03:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.488 03:13:13 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:18.488 03:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.488 03:13:13 -- common/autotest_common.sh@10 -- # set +x 00:25:18.488 [2024-07-14 03:13:13.583674] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:18.488 03:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.488 03:13:13 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:18.488 03:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.488 03:13:13 -- common/autotest_common.sh@10 -- # set +x 00:25:18.745 nvme0n1 00:25:18.745 03:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.745 03:13:13 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:18.745 03:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.745 03:13:13 -- common/autotest_common.sh@10 -- # set +x 00:25:18.745 [ 00:25:18.745 { 00:25:18.745 "name": "nvme0n1", 00:25:18.745 "aliases": [ 00:25:18.745 "066951b5-0a3d-4d42-8e0a-1f6be2c9657d" 00:25:18.745 ], 00:25:18.745 "product_name": "NVMe disk", 00:25:18.745 "block_size": 512, 00:25:18.745 "num_blocks": 2097152, 00:25:18.745 "uuid": "066951b5-0a3d-4d42-8e0a-1f6be2c9657d", 00:25:18.745 "assigned_rate_limits": { 00:25:18.745 "rw_ios_per_sec": 0, 00:25:18.745 "rw_mbytes_per_sec": 0, 00:25:18.745 "r_mbytes_per_sec": 0, 00:25:18.745 "w_mbytes_per_sec": 0 00:25:18.745 }, 00:25:18.745 
"claimed": false, 00:25:18.745 "zoned": false, 00:25:18.745 "supported_io_types": { 00:25:18.745 "read": true, 00:25:18.745 "write": true, 00:25:18.745 "unmap": false, 00:25:18.745 "write_zeroes": true, 00:25:18.745 "flush": true, 00:25:18.745 "reset": true, 00:25:18.745 "compare": true, 00:25:18.745 "compare_and_write": true, 00:25:18.745 "abort": true, 00:25:18.745 "nvme_admin": true, 00:25:18.745 "nvme_io": true 00:25:18.745 }, 00:25:18.745 "driver_specific": { 00:25:18.745 "nvme": [ 00:25:18.745 { 00:25:18.745 "trid": { 00:25:18.745 "trtype": "TCP", 00:25:18.745 "adrfam": "IPv4", 00:25:18.745 "traddr": "10.0.0.2", 00:25:18.745 "trsvcid": "4420", 00:25:18.745 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:18.745 }, 00:25:18.745 "ctrlr_data": { 00:25:18.745 "cntlid": 1, 00:25:18.745 "vendor_id": "0x8086", 00:25:18.745 "model_number": "SPDK bdev Controller", 00:25:18.745 "serial_number": "00000000000000000000", 00:25:18.745 "firmware_revision": "24.01.1", 00:25:18.745 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:18.745 "oacs": { 00:25:18.745 "security": 0, 00:25:18.745 "format": 0, 00:25:18.745 "firmware": 0, 00:25:18.745 "ns_manage": 0 00:25:18.745 }, 00:25:18.745 "multi_ctrlr": true, 00:25:18.745 "ana_reporting": false 00:25:18.745 }, 00:25:18.745 "vs": { 00:25:18.745 "nvme_version": "1.3" 00:25:18.745 }, 00:25:18.745 "ns_data": { 00:25:18.745 "id": 1, 00:25:18.745 "can_share": true 00:25:18.745 } 00:25:18.745 } 00:25:18.745 ], 00:25:18.745 "mp_policy": "active_passive" 00:25:18.745 } 00:25:18.745 } 00:25:18.745 ] 00:25:18.745 03:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.745 03:13:13 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:18.745 03:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.745 03:13:13 -- common/autotest_common.sh@10 -- # set +x 00:25:18.745 [2024-07-14 03:13:13.832419] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 
00:25:18.745 [2024-07-14 03:13:13.832507] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a7b480 (9): Bad file descriptor 00:25:18.745 [2024-07-14 03:13:13.965023] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:18.745 03:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.745 03:13:13 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:18.745 03:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.745 03:13:13 -- common/autotest_common.sh@10 -- # set +x 00:25:18.745 [ 00:25:18.745 { 00:25:18.745 "name": "nvme0n1", 00:25:18.745 "aliases": [ 00:25:18.745 "066951b5-0a3d-4d42-8e0a-1f6be2c9657d" 00:25:18.745 ], 00:25:18.745 "product_name": "NVMe disk", 00:25:18.745 "block_size": 512, 00:25:18.745 "num_blocks": 2097152, 00:25:18.745 "uuid": "066951b5-0a3d-4d42-8e0a-1f6be2c9657d", 00:25:18.745 "assigned_rate_limits": { 00:25:18.745 "rw_ios_per_sec": 0, 00:25:18.745 "rw_mbytes_per_sec": 0, 00:25:18.745 "r_mbytes_per_sec": 0, 00:25:18.745 "w_mbytes_per_sec": 0 00:25:18.745 }, 00:25:18.745 "claimed": false, 00:25:18.745 "zoned": false, 00:25:18.745 "supported_io_types": { 00:25:18.745 "read": true, 00:25:18.745 "write": true, 00:25:18.745 "unmap": false, 00:25:18.745 "write_zeroes": true, 00:25:18.745 "flush": true, 00:25:18.745 "reset": true, 00:25:18.745 "compare": true, 00:25:18.745 "compare_and_write": true, 00:25:18.745 "abort": true, 00:25:18.745 "nvme_admin": true, 00:25:18.745 "nvme_io": true 00:25:18.745 }, 00:25:18.745 "driver_specific": { 00:25:18.745 "nvme": [ 00:25:18.745 { 00:25:18.745 "trid": { 00:25:18.745 "trtype": "TCP", 00:25:18.745 "adrfam": "IPv4", 00:25:18.745 "traddr": "10.0.0.2", 00:25:18.745 "trsvcid": "4420", 00:25:18.745 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:18.746 }, 00:25:18.746 "ctrlr_data": { 00:25:18.746 "cntlid": 2, 00:25:18.746 "vendor_id": "0x8086", 00:25:18.746 "model_number": "SPDK bdev 
Controller", 00:25:18.746 "serial_number": "00000000000000000000", 00:25:18.746 "firmware_revision": "24.01.1", 00:25:18.746 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:18.746 "oacs": { 00:25:18.746 "security": 0, 00:25:18.746 "format": 0, 00:25:18.746 "firmware": 0, 00:25:18.746 "ns_manage": 0 00:25:18.746 }, 00:25:18.746 "multi_ctrlr": true, 00:25:18.746 "ana_reporting": false 00:25:18.746 }, 00:25:18.746 "vs": { 00:25:18.746 "nvme_version": "1.3" 00:25:18.746 }, 00:25:18.746 "ns_data": { 00:25:18.746 "id": 1, 00:25:18.746 "can_share": true 00:25:18.746 } 00:25:18.746 } 00:25:18.746 ], 00:25:18.746 "mp_policy": "active_passive" 00:25:18.746 } 00:25:18.746 } 00:25:18.746 ] 00:25:18.746 03:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.746 03:13:13 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.746 03:13:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.746 03:13:13 -- common/autotest_common.sh@10 -- # set +x 00:25:18.746 03:13:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.746 03:13:13 -- host/async_init.sh@53 -- # mktemp 00:25:19.004 03:13:13 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.UcAbaPW9IA 00:25:19.004 03:13:14 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:19.004 03:13:14 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.UcAbaPW9IA 00:25:19.004 03:13:14 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:19.004 03:13:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.004 03:13:14 -- common/autotest_common.sh@10 -- # set +x 00:25:19.004 03:13:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.004 03:13:14 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:19.004 03:13:14 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:25:19.004 03:13:14 -- common/autotest_common.sh@10 -- # set +x 00:25:19.004 [2024-07-14 03:13:14.017063] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:19.004 [2024-07-14 03:13:14.017189] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:19.004 03:13:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.004 03:13:14 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UcAbaPW9IA 00:25:19.004 03:13:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.004 03:13:14 -- common/autotest_common.sh@10 -- # set +x 00:25:19.004 03:13:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.004 03:13:14 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.UcAbaPW9IA 00:25:19.004 03:13:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.004 03:13:14 -- common/autotest_common.sh@10 -- # set +x 00:25:19.004 [2024-07-14 03:13:14.033096] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:19.004 nvme0n1 00:25:19.004 03:13:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.004 03:13:14 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:19.004 03:13:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.004 03:13:14 -- common/autotest_common.sh@10 -- # set +x 00:25:19.004 [ 00:25:19.004 { 00:25:19.004 "name": "nvme0n1", 00:25:19.004 "aliases": [ 00:25:19.004 "066951b5-0a3d-4d42-8e0a-1f6be2c9657d" 00:25:19.004 ], 00:25:19.004 "product_name": "NVMe disk", 00:25:19.004 "block_size": 512, 00:25:19.004 "num_blocks": 2097152, 00:25:19.004 "uuid": "066951b5-0a3d-4d42-8e0a-1f6be2c9657d", 00:25:19.004 "assigned_rate_limits": { 00:25:19.004 "rw_ios_per_sec": 0, 
00:25:19.004 "rw_mbytes_per_sec": 0, 00:25:19.004 "r_mbytes_per_sec": 0, 00:25:19.004 "w_mbytes_per_sec": 0 00:25:19.004 }, 00:25:19.004 "claimed": false, 00:25:19.004 "zoned": false, 00:25:19.004 "supported_io_types": { 00:25:19.004 "read": true, 00:25:19.004 "write": true, 00:25:19.004 "unmap": false, 00:25:19.004 "write_zeroes": true, 00:25:19.004 "flush": true, 00:25:19.004 "reset": true, 00:25:19.004 "compare": true, 00:25:19.004 "compare_and_write": true, 00:25:19.004 "abort": true, 00:25:19.004 "nvme_admin": true, 00:25:19.004 "nvme_io": true 00:25:19.004 }, 00:25:19.004 "driver_specific": { 00:25:19.004 "nvme": [ 00:25:19.004 { 00:25:19.004 "trid": { 00:25:19.004 "trtype": "TCP", 00:25:19.004 "adrfam": "IPv4", 00:25:19.004 "traddr": "10.0.0.2", 00:25:19.004 "trsvcid": "4421", 00:25:19.004 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:19.004 }, 00:25:19.004 "ctrlr_data": { 00:25:19.004 "cntlid": 3, 00:25:19.004 "vendor_id": "0x8086", 00:25:19.004 "model_number": "SPDK bdev Controller", 00:25:19.004 "serial_number": "00000000000000000000", 00:25:19.004 "firmware_revision": "24.01.1", 00:25:19.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:19.004 "oacs": { 00:25:19.004 "security": 0, 00:25:19.004 "format": 0, 00:25:19.004 "firmware": 0, 00:25:19.004 "ns_manage": 0 00:25:19.004 }, 00:25:19.004 "multi_ctrlr": true, 00:25:19.004 "ana_reporting": false 00:25:19.004 }, 00:25:19.004 "vs": { 00:25:19.004 "nvme_version": "1.3" 00:25:19.004 }, 00:25:19.004 "ns_data": { 00:25:19.004 "id": 1, 00:25:19.004 "can_share": true 00:25:19.004 } 00:25:19.004 } 00:25:19.004 ], 00:25:19.004 "mp_policy": "active_passive" 00:25:19.004 } 00:25:19.004 } 00:25:19.004 ] 00:25:19.004 03:13:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.004 03:13:14 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.004 03:13:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:19.004 03:13:14 -- common/autotest_common.sh@10 -- # set +x 00:25:19.004 
03:13:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:19.004 03:13:14 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.UcAbaPW9IA 00:25:19.004 03:13:14 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:19.004 03:13:14 -- host/async_init.sh@78 -- # nvmftestfini 00:25:19.004 03:13:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:19.004 03:13:14 -- nvmf/common.sh@116 -- # sync 00:25:19.004 03:13:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:19.005 03:13:14 -- nvmf/common.sh@119 -- # set +e 00:25:19.005 03:13:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:19.005 03:13:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:19.005 rmmod nvme_tcp 00:25:19.005 rmmod nvme_fabrics 00:25:19.005 rmmod nvme_keyring 00:25:19.005 03:13:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:19.005 03:13:14 -- nvmf/common.sh@123 -- # set -e 00:25:19.005 03:13:14 -- nvmf/common.sh@124 -- # return 0 00:25:19.005 03:13:14 -- nvmf/common.sh@477 -- # '[' -n 2084703 ']' 00:25:19.005 03:13:14 -- nvmf/common.sh@478 -- # killprocess 2084703 00:25:19.005 03:13:14 -- common/autotest_common.sh@926 -- # '[' -z 2084703 ']' 00:25:19.005 03:13:14 -- common/autotest_common.sh@930 -- # kill -0 2084703 00:25:19.005 03:13:14 -- common/autotest_common.sh@931 -- # uname 00:25:19.005 03:13:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:19.005 03:13:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2084703 00:25:19.005 03:13:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:19.005 03:13:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:19.005 03:13:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2084703' 00:25:19.005 killing process with pid 2084703 00:25:19.005 03:13:14 -- common/autotest_common.sh@945 -- # kill 2084703 00:25:19.005 03:13:14 -- common/autotest_common.sh@950 -- # wait 2084703 00:25:19.263 03:13:14 -- nvmf/common.sh@480 -- # '[' '' == iso 
']' 00:25:19.263 03:13:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:19.263 03:13:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:19.263 03:13:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:19.263 03:13:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:19.263 03:13:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.263 03:13:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:19.263 03:13:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.795 03:13:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:21.795 00:25:21.795 real 0m6.176s 00:25:21.795 user 0m2.909s 00:25:21.795 sys 0m1.847s 00:25:21.795 03:13:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:21.795 03:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:21.795 ************************************ 00:25:21.795 END TEST nvmf_async_init 00:25:21.795 ************************************ 00:25:21.795 03:13:16 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:21.795 03:13:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:21.795 03:13:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:21.795 03:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:21.795 ************************************ 00:25:21.795 START TEST dma 00:25:21.795 ************************************ 00:25:21.795 03:13:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:21.795 * Looking for test storage... 
00:25:21.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:21.796 03:13:16 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.796 03:13:16 -- nvmf/common.sh@7 -- # uname -s 00:25:21.796 03:13:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.796 03:13:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.796 03:13:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.796 03:13:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.796 03:13:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.796 03:13:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.796 03:13:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.796 03:13:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.796 03:13:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.796 03:13:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.796 03:13:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:21.796 03:13:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:21.796 03:13:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.796 03:13:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.796 03:13:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.796 03:13:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.796 03:13:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.796 03:13:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.796 03:13:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.796 03:13:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.796 03:13:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.796 03:13:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.796 03:13:16 -- paths/export.sh@5 -- # export PATH 00:25:21.796 03:13:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.796 03:13:16 -- nvmf/common.sh@46 -- # : 0 00:25:21.796 03:13:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:21.796 03:13:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:21.796 03:13:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:21.796 03:13:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.796 03:13:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.796 03:13:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:21.796 03:13:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:21.796 03:13:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:21.796 03:13:16 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:21.796 03:13:16 -- host/dma.sh@13 -- # exit 0 00:25:21.796 00:25:21.796 real 0m0.066s 00:25:21.796 user 0m0.034s 00:25:21.796 sys 0m0.038s 00:25:21.796 03:13:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:21.796 03:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:21.796 ************************************ 00:25:21.796 END TEST dma 00:25:21.796 ************************************ 00:25:21.796 03:13:16 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:21.796 03:13:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:21.796 03:13:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:21.796 03:13:16 -- common/autotest_common.sh@10 
-- # set +x 00:25:21.796 ************************************ 00:25:21.796 START TEST nvmf_identify 00:25:21.796 ************************************ 00:25:21.796 03:13:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:21.796 * Looking for test storage... 00:25:21.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:21.796 03:13:16 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.796 03:13:16 -- nvmf/common.sh@7 -- # uname -s 00:25:21.796 03:13:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.796 03:13:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.796 03:13:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.796 03:13:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.796 03:13:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.796 03:13:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.796 03:13:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.796 03:13:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.796 03:13:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.796 03:13:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.796 03:13:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:21.796 03:13:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:21.796 03:13:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.796 03:13:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.796 03:13:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.796 03:13:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.796 03:13:16 -- scripts/common.sh@433 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:25:21.796 03:13:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.796 03:13:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.796 03:13:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.796 03:13:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.796 03:13:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.796 03:13:16 -- paths/export.sh@5 -- # export PATH 00:25:21.796 
03:13:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.796 03:13:16 -- nvmf/common.sh@46 -- # : 0 00:25:21.796 03:13:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:21.796 03:13:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:21.796 03:13:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:21.796 03:13:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.796 03:13:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.796 03:13:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:21.796 03:13:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:21.796 03:13:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:21.796 03:13:16 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:21.796 03:13:16 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:21.796 03:13:16 -- host/identify.sh@14 -- # nvmftestinit 00:25:21.796 03:13:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:21.796 03:13:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.796 03:13:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:21.796 03:13:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:21.796 03:13:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:21.796 03:13:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.796 03:13:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.796 03:13:16 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:25:21.796 03:13:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:21.796 03:13:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:21.796 03:13:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:21.796 03:13:16 -- common/autotest_common.sh@10 -- # set +x 00:25:23.694 03:13:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:23.694 03:13:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:23.694 03:13:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:23.694 03:13:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:23.694 03:13:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:23.694 03:13:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:23.694 03:13:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:23.694 03:13:18 -- nvmf/common.sh@294 -- # net_devs=() 00:25:23.694 03:13:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:23.694 03:13:18 -- nvmf/common.sh@295 -- # e810=() 00:25:23.694 03:13:18 -- nvmf/common.sh@295 -- # local -ga e810 00:25:23.694 03:13:18 -- nvmf/common.sh@296 -- # x722=() 00:25:23.695 03:13:18 -- nvmf/common.sh@296 -- # local -ga x722 00:25:23.695 03:13:18 -- nvmf/common.sh@297 -- # mlx=() 00:25:23.695 03:13:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:23.695 03:13:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.695 03:13:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.695 03:13:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.695 03:13:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.695 03:13:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.695 03:13:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.695 03:13:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.695 03:13:18 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.695 03:13:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.695 03:13:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.695 03:13:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.695 03:13:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:23.695 03:13:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:23.695 03:13:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:23.695 03:13:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:23.695 03:13:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:23.695 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:23.695 03:13:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:23.695 03:13:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:23.695 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:23.695 03:13:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:23.695 03:13:18 -- 
nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:23.695 03:13:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.695 03:13:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:23.695 03:13:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.695 03:13:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:23.695 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:23.695 03:13:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.695 03:13:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:23.695 03:13:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.695 03:13:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:23.695 03:13:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.695 03:13:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:23.695 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:23.695 03:13:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.695 03:13:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:23.695 03:13:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:23.695 03:13:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:23.695 03:13:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.695 03:13:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.695 03:13:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.695 03:13:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:23.695 03:13:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:23.695 03:13:18 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:23.695 03:13:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:23.695 03:13:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:23.695 03:13:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.695 03:13:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:23.695 03:13:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:23.695 03:13:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:23.695 03:13:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:23.695 03:13:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:23.695 03:13:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:23.695 03:13:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:23.695 03:13:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:23.695 03:13:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:23.695 03:13:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:23.695 03:13:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:23.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:23.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:25:23.695 00:25:23.695 --- 10.0.0.2 ping statistics --- 00:25:23.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.695 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:25:23.695 03:13:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:23.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:23.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:25:23.695 00:25:23.695 --- 10.0.0.1 ping statistics --- 00:25:23.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.695 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:25:23.695 03:13:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.695 03:13:18 -- nvmf/common.sh@410 -- # return 0 00:25:23.695 03:13:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:23.695 03:13:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:23.695 03:13:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:23.695 03:13:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:23.695 03:13:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:23.695 03:13:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:23.695 03:13:18 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:23.695 03:13:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:23.695 03:13:18 -- common/autotest_common.sh@10 -- # set +x 00:25:23.695 03:13:18 -- host/identify.sh@19 -- # nvmfpid=2086849 00:25:23.695 03:13:18 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:23.695 03:13:18 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:23.695 03:13:18 -- host/identify.sh@23 -- # waitforlisten 2086849 00:25:23.695 03:13:18 -- common/autotest_common.sh@819 -- # '[' -z 2086849 ']' 00:25:23.695 03:13:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.695 03:13:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:23.695 03:13:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:23.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.695 03:13:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:23.695 03:13:18 -- common/autotest_common.sh@10 -- # set +x 00:25:23.695 [2024-07-14 03:13:18.686708] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:23.695 [2024-07-14 03:13:18.686786] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.695 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.695 [2024-07-14 03:13:18.757544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:23.695 [2024-07-14 03:13:18.850183] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:23.695 [2024-07-14 03:13:18.850361] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:23.695 [2024-07-14 03:13:18.850381] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.695 [2024-07-14 03:13:18.850396] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:23.695 [2024-07-14 03:13:18.850492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.695 [2024-07-14 03:13:18.850683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:23.695 [2024-07-14 03:13:18.850738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:23.695 [2024-07-14 03:13:18.850740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.630 03:13:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:24.630 03:13:19 -- common/autotest_common.sh@852 -- # return 0 00:25:24.630 03:13:19 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:24.630 03:13:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.630 03:13:19 -- common/autotest_common.sh@10 -- # set +x 00:25:24.630 [2024-07-14 03:13:19.616375] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.630 03:13:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.630 03:13:19 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:24.630 03:13:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:24.630 03:13:19 -- common/autotest_common.sh@10 -- # set +x 00:25:24.630 03:13:19 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:24.630 03:13:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.630 03:13:19 -- common/autotest_common.sh@10 -- # set +x 00:25:24.630 Malloc0 00:25:24.630 03:13:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.630 03:13:19 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:24.630 03:13:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.630 03:13:19 -- common/autotest_common.sh@10 -- # set +x 00:25:24.630 03:13:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.630 03:13:19 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 
--nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:24.630 03:13:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.630 03:13:19 -- common/autotest_common.sh@10 -- # set +x 00:25:24.630 03:13:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.630 03:13:19 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:24.630 03:13:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.630 03:13:19 -- common/autotest_common.sh@10 -- # set +x 00:25:24.630 [2024-07-14 03:13:19.693580] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.630 03:13:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.630 03:13:19 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:24.630 03:13:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.630 03:13:19 -- common/autotest_common.sh@10 -- # set +x 00:25:24.630 03:13:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.630 03:13:19 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:24.630 03:13:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.630 03:13:19 -- common/autotest_common.sh@10 -- # set +x 00:25:24.630 [2024-07-14 03:13:19.709348] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:24.630 [ 00:25:24.630 { 00:25:24.630 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:24.630 "subtype": "Discovery", 00:25:24.630 "listen_addresses": [ 00:25:24.630 { 00:25:24.630 "transport": "TCP", 00:25:24.630 "trtype": "TCP", 00:25:24.630 "adrfam": "IPv4", 00:25:24.630 "traddr": "10.0.0.2", 00:25:24.630 "trsvcid": "4420" 00:25:24.630 } 00:25:24.630 ], 00:25:24.630 "allow_any_host": true, 00:25:24.630 "hosts": [] 00:25:24.630 }, 00:25:24.630 
{ 00:25:24.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:24.630 "subtype": "NVMe", 00:25:24.630 "listen_addresses": [ 00:25:24.630 { 00:25:24.630 "transport": "TCP", 00:25:24.630 "trtype": "TCP", 00:25:24.630 "adrfam": "IPv4", 00:25:24.630 "traddr": "10.0.0.2", 00:25:24.630 "trsvcid": "4420" 00:25:24.630 } 00:25:24.630 ], 00:25:24.630 "allow_any_host": true, 00:25:24.630 "hosts": [], 00:25:24.630 "serial_number": "SPDK00000000000001", 00:25:24.630 "model_number": "SPDK bdev Controller", 00:25:24.630 "max_namespaces": 32, 00:25:24.630 "min_cntlid": 1, 00:25:24.630 "max_cntlid": 65519, 00:25:24.630 "namespaces": [ 00:25:24.630 { 00:25:24.630 "nsid": 1, 00:25:24.630 "bdev_name": "Malloc0", 00:25:24.630 "name": "Malloc0", 00:25:24.630 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:24.630 "eui64": "ABCDEF0123456789", 00:25:24.630 "uuid": "e1946540-36d0-4703-8818-98c96dca0969" 00:25:24.630 } 00:25:24.630 ] 00:25:24.630 } 00:25:24.630 ] 00:25:24.630 03:13:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.630 03:13:19 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:24.630 [2024-07-14 03:13:19.733502] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:24.630 [2024-07-14 03:13:19.733544] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087005 ] 00:25:24.630 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.630 [2024-07-14 03:13:19.768362] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:24.630 [2024-07-14 03:13:19.768418] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:24.630 [2024-07-14 03:13:19.768428] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:24.630 [2024-07-14 03:13:19.768446] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:24.630 [2024-07-14 03:13:19.768459] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:24.630 [2024-07-14 03:13:19.768785] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:24.630 [2024-07-14 03:13:19.768850] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x109feb0 0 00:25:24.630 [2024-07-14 03:13:19.774889] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:24.630 [2024-07-14 03:13:19.774910] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:24.630 [2024-07-14 03:13:19.774919] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:24.630 [2024-07-14 03:13:19.774925] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:24.630 [2024-07-14 03:13:19.774982] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.630 [2024-07-14 03:13:19.774995] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:25:24.630 [2024-07-14 03:13:19.775003] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109feb0) 00:25:24.630 [2024-07-14 03:13:19.775020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:24.630 [2024-07-14 03:13:19.775047] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8f80, cid 0, qid 0 00:25:24.630 [2024-07-14 03:13:19.782879] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.630 [2024-07-14 03:13:19.782897] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.630 [2024-07-14 03:13:19.782904] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.630 [2024-07-14 03:13:19.782912] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8f80) on tqpair=0x109feb0 00:25:24.630 [2024-07-14 03:13:19.782940] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:24.630 [2024-07-14 03:13:19.782950] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:24.630 [2024-07-14 03:13:19.782960] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:24.630 [2024-07-14 03:13:19.782978] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.630 [2024-07-14 03:13:19.782987] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.782993] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109feb0) 00:25:24.631 [2024-07-14 03:13:19.783005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.631 [2024-07-14 03:13:19.783028] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x10f8f80, cid 0, qid 0 00:25:24.631 [2024-07-14 03:13:19.783212] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.631 [2024-07-14 03:13:19.783224] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.631 [2024-07-14 03:13:19.783231] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.783242] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8f80) on tqpair=0x109feb0 00:25:24.631 [2024-07-14 03:13:19.783253] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:24.631 [2024-07-14 03:13:19.783266] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:24.631 [2024-07-14 03:13:19.783279] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.783286] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.783292] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109feb0) 00:25:24.631 [2024-07-14 03:13:19.783302] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.631 [2024-07-14 03:13:19.783323] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8f80, cid 0, qid 0 00:25:24.631 [2024-07-14 03:13:19.783469] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.631 [2024-07-14 03:13:19.783480] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.631 [2024-07-14 03:13:19.783487] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.783493] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8f80) on tqpair=0x109feb0 00:25:24.631 [2024-07-14 
03:13:19.783503] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:24.631 [2024-07-14 03:13:19.783516] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:24.631 [2024-07-14 03:13:19.783528] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.783535] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.783541] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109feb0) 00:25:24.631 [2024-07-14 03:13:19.783551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.631 [2024-07-14 03:13:19.783571] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8f80, cid 0, qid 0 00:25:24.631 [2024-07-14 03:13:19.783721] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.631 [2024-07-14 03:13:19.783735] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.631 [2024-07-14 03:13:19.783742] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.783748] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8f80) on tqpair=0x109feb0 00:25:24.631 [2024-07-14 03:13:19.783758] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:24.631 [2024-07-14 03:13:19.783774] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.783783] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.783789] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x109feb0) 00:25:24.631 [2024-07-14 03:13:19.783799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.631 [2024-07-14 03:13:19.783819] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8f80, cid 0, qid 0 00:25:24.631 [2024-07-14 03:13:19.784000] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.631 [2024-07-14 03:13:19.784016] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.631 [2024-07-14 03:13:19.784023] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.784030] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8f80) on tqpair=0x109feb0 00:25:24.631 [2024-07-14 03:13:19.784039] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:24.631 [2024-07-14 03:13:19.784053] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:24.631 [2024-07-14 03:13:19.784067] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:24.631 [2024-07-14 03:13:19.784193] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:24.631 [2024-07-14 03:13:19.784201] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:24.631 [2024-07-14 03:13:19.784215] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.784223] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.784229] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109feb0) 00:25:24.631 [2024-07-14 03:13:19.784239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.631 [2024-07-14 03:13:19.784260] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8f80, cid 0, qid 0 00:25:24.631 [2024-07-14 03:13:19.784404] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.631 [2024-07-14 03:13:19.784415] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.631 [2024-07-14 03:13:19.784422] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.784429] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8f80) on tqpair=0x109feb0 00:25:24.631 [2024-07-14 03:13:19.784438] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:24.631 [2024-07-14 03:13:19.784453] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.784461] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.784468] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109feb0) 00:25:24.631 [2024-07-14 03:13:19.784478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.631 [2024-07-14 03:13:19.784497] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8f80, cid 0, qid 0 00:25:24.631 [2024-07-14 03:13:19.784637] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.631 [2024-07-14 03:13:19.784651] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.631 [2024-07-14 03:13:19.784658] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.784664] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8f80) on tqpair=0x109feb0 00:25:24.631 [2024-07-14 03:13:19.784674] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:24.631 [2024-07-14 03:13:19.784682] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:24.631 [2024-07-14 03:13:19.784695] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:24.631 [2024-07-14 03:13:19.784714] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:24.631 [2024-07-14 03:13:19.784728] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.784735] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.784742] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109feb0) 00:25:24.631 [2024-07-14 03:13:19.784752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.631 [2024-07-14 03:13:19.784776] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8f80, cid 0, qid 0 00:25:24.631 [2024-07-14 03:13:19.784986] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.631 [2024-07-14 03:13:19.785003] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.631 [2024-07-14 03:13:19.785010] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.631 
[2024-07-14 03:13:19.785017] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x109feb0): datao=0, datal=4096, cccid=0 00:25:24.631 [2024-07-14 03:13:19.785025] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10f8f80) on tqpair(0x109feb0): expected_datao=0, payload_size=4096 00:25:24.631 [2024-07-14 03:13:19.785037] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.785045] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.785084] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.631 [2024-07-14 03:13:19.785095] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.631 [2024-07-14 03:13:19.785101] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.785108] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8f80) on tqpair=0x109feb0 00:25:24.631 [2024-07-14 03:13:19.785121] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:24.631 [2024-07-14 03:13:19.785131] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:24.631 [2024-07-14 03:13:19.785138] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:24.631 [2024-07-14 03:13:19.785147] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:24.631 [2024-07-14 03:13:19.785154] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:24.631 [2024-07-14 03:13:19.785163] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:24.631 [2024-07-14 
03:13:19.785198] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:24.631 [2024-07-14 03:13:19.785211] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.785218] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.785225] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109feb0) 00:25:24.631 [2024-07-14 03:13:19.785235] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:24.631 [2024-07-14 03:13:19.785256] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8f80, cid 0, qid 0 00:25:24.631 [2024-07-14 03:13:19.785409] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.631 [2024-07-14 03:13:19.785420] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.631 [2024-07-14 03:13:19.785427] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.785433] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f8f80) on tqpair=0x109feb0 00:25:24.631 [2024-07-14 03:13:19.785446] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.785453] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.785459] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x109feb0) 00:25:24.631 [2024-07-14 03:13:19.785468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.631 [2024-07-14 03:13:19.785478] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.631 [2024-07-14 03:13:19.785485] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.785495] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x109feb0) 00:25:24.632 [2024-07-14 03:13:19.785504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.632 [2024-07-14 03:13:19.785514] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.785520] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.785526] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x109feb0) 00:25:24.632 [2024-07-14 03:13:19.785535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.632 [2024-07-14 03:13:19.785544] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.785550] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.785557] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109feb0) 00:25:24.632 [2024-07-14 03:13:19.785565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.632 [2024-07-14 03:13:19.785573] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:24.632 [2024-07-14 03:13:19.785591] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:24.632 [2024-07-14 03:13:19.785603] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.785610] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.785616] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x109feb0) 00:25:24.632 [2024-07-14 03:13:19.785626] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.632 [2024-07-14 03:13:19.785647] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f8f80, cid 0, qid 0 00:25:24.632 [2024-07-14 03:13:19.785658] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f90e0, cid 1, qid 0 00:25:24.632 [2024-07-14 03:13:19.785665] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f9240, cid 2, qid 0 00:25:24.632 [2024-07-14 03:13:19.785673] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, cid 3, qid 0 00:25:24.632 [2024-07-14 03:13:19.785681] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f9500, cid 4, qid 0 00:25:24.632 [2024-07-14 03:13:19.785843] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.632 [2024-07-14 03:13:19.785879] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.632 [2024-07-14 03:13:19.785887] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.785894] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f9500) on tqpair=0x109feb0 00:25:24.632 [2024-07-14 03:13:19.785904] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:24.632 [2024-07-14 03:13:19.785913] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:24.632 [2024-07-14 03:13:19.785931] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.632 [2024-07-14 
03:13:19.785940] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.785946] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x109feb0) 00:25:24.632 [2024-07-14 03:13:19.785957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.632 [2024-07-14 03:13:19.785977] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f9500, cid 4, qid 0 00:25:24.632 [2024-07-14 03:13:19.786142] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.632 [2024-07-14 03:13:19.786154] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.632 [2024-07-14 03:13:19.786161] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.786168] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x109feb0): datao=0, datal=4096, cccid=4 00:25:24.632 [2024-07-14 03:13:19.786175] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10f9500) on tqpair(0x109feb0): expected_datao=0, payload_size=4096 00:25:24.632 [2024-07-14 03:13:19.786186] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.786194] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.786252] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.632 [2024-07-14 03:13:19.786263] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.632 [2024-07-14 03:13:19.786270] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.786276] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f9500) on tqpair=0x109feb0 00:25:24.632 [2024-07-14 03:13:19.786295] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:24.632 [2024-07-14 03:13:19.786331] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.786342] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.786349] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x109feb0) 00:25:24.632 [2024-07-14 03:13:19.786359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.632 [2024-07-14 03:13:19.786371] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.786378] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.786384] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x109feb0) 00:25:24.632 [2024-07-14 03:13:19.786393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.632 [2024-07-14 03:13:19.786419] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f9500, cid 4, qid 0 00:25:24.632 [2024-07-14 03:13:19.786431] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f9660, cid 5, qid 0 00:25:24.632 [2024-07-14 03:13:19.786649] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.632 [2024-07-14 03:13:19.786661] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.632 [2024-07-14 03:13:19.786668] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.786674] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x109feb0): datao=0, datal=1024, cccid=4 00:25:24.632 [2024-07-14 03:13:19.786682] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x10f9500) on tqpair(0x109feb0): expected_datao=0, payload_size=1024 00:25:24.632 [2024-07-14 03:13:19.786692] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.786699] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.786707] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.632 [2024-07-14 03:13:19.786716] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.632 [2024-07-14 03:13:19.786722] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.786728] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f9660) on tqpair=0x109feb0 00:25:24.632 [2024-07-14 03:13:19.828883] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.632 [2024-07-14 03:13:19.828902] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.632 [2024-07-14 03:13:19.828909] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.828916] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f9500) on tqpair=0x109feb0 00:25:24.632 [2024-07-14 03:13:19.828942] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.828953] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.828959] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x109feb0) 00:25:24.632 [2024-07-14 03:13:19.828970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.632 [2024-07-14 03:13:19.829014] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f9500, cid 4, qid 0 00:25:24.632 [2024-07-14 03:13:19.829214] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 7 00:25:24.632 [2024-07-14 03:13:19.829230] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.632 [2024-07-14 03:13:19.829237] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.829243] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x109feb0): datao=0, datal=3072, cccid=4 00:25:24.632 [2024-07-14 03:13:19.829251] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10f9500) on tqpair(0x109feb0): expected_datao=0, payload_size=3072 00:25:24.632 [2024-07-14 03:13:19.829261] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.829269] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.829314] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.632 [2024-07-14 03:13:19.829325] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.632 [2024-07-14 03:13:19.829331] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.829338] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f9500) on tqpair=0x109feb0 00:25:24.632 [2024-07-14 03:13:19.829352] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.829361] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.829367] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x109feb0) 00:25:24.632 [2024-07-14 03:13:19.829377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.632 [2024-07-14 03:13:19.829404] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f9500, cid 4, qid 0 00:25:24.632 [2024-07-14 03:13:19.829559] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.632 [2024-07-14 03:13:19.829571] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.632 [2024-07-14 03:13:19.829577] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.829584] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x109feb0): datao=0, datal=8, cccid=4 00:25:24.632 [2024-07-14 03:13:19.829591] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x10f9500) on tqpair(0x109feb0): expected_datao=0, payload_size=8 00:25:24.632 [2024-07-14 03:13:19.829601] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.829609] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.871884] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.632 [2024-07-14 03:13:19.871903] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.632 [2024-07-14 03:13:19.871926] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.632 [2024-07-14 03:13:19.871933] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f9500) on tqpair=0x109feb0 00:25:24.632 ===================================================== 00:25:24.632 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:24.632 ===================================================== 00:25:24.632 Controller Capabilities/Features 00:25:24.632 ================================ 00:25:24.632 Vendor ID: 0000 00:25:24.632 Subsystem Vendor ID: 0000 00:25:24.632 Serial Number: .................... 00:25:24.632 Model Number: ........................................ 
00:25:24.632 Firmware Version: 24.01.1 00:25:24.632 Recommended Arb Burst: 0 00:25:24.632 IEEE OUI Identifier: 00 00 00 00:25:24.632 Multi-path I/O 00:25:24.632 May have multiple subsystem ports: No 00:25:24.633 May have multiple controllers: No 00:25:24.633 Associated with SR-IOV VF: No 00:25:24.633 Max Data Transfer Size: 131072 00:25:24.633 Max Number of Namespaces: 0 00:25:24.633 Max Number of I/O Queues: 1024 00:25:24.633 NVMe Specification Version (VS): 1.3 00:25:24.633 NVMe Specification Version (Identify): 1.3 00:25:24.633 Maximum Queue Entries: 128 00:25:24.633 Contiguous Queues Required: Yes 00:25:24.633 Arbitration Mechanisms Supported 00:25:24.633 Weighted Round Robin: Not Supported 00:25:24.633 Vendor Specific: Not Supported 00:25:24.633 Reset Timeout: 15000 ms 00:25:24.633 Doorbell Stride: 4 bytes 00:25:24.633 NVM Subsystem Reset: Not Supported 00:25:24.633 Command Sets Supported 00:25:24.633 NVM Command Set: Supported 00:25:24.633 Boot Partition: Not Supported 00:25:24.633 Memory Page Size Minimum: 4096 bytes 00:25:24.633 Memory Page Size Maximum: 4096 bytes 00:25:24.633 Persistent Memory Region: Not Supported 00:25:24.633 Optional Asynchronous Events Supported 00:25:24.633 Namespace Attribute Notices: Not Supported 00:25:24.633 Firmware Activation Notices: Not Supported 00:25:24.633 ANA Change Notices: Not Supported 00:25:24.633 PLE Aggregate Log Change Notices: Not Supported 00:25:24.633 LBA Status Info Alert Notices: Not Supported 00:25:24.633 EGE Aggregate Log Change Notices: Not Supported 00:25:24.633 Normal NVM Subsystem Shutdown event: Not Supported 00:25:24.633 Zone Descriptor Change Notices: Not Supported 00:25:24.633 Discovery Log Change Notices: Supported 00:25:24.633 Controller Attributes 00:25:24.633 128-bit Host Identifier: Not Supported 00:25:24.633 Non-Operational Permissive Mode: Not Supported 00:25:24.633 NVM Sets: Not Supported 00:25:24.633 Read Recovery Levels: Not Supported 00:25:24.633 Endurance Groups: Not Supported 
00:25:24.633 Predictable Latency Mode: Not Supported 00:25:24.633 Traffic Based Keep ALive: Not Supported 00:25:24.633 Namespace Granularity: Not Supported 00:25:24.633 SQ Associations: Not Supported 00:25:24.633 UUID List: Not Supported 00:25:24.633 Multi-Domain Subsystem: Not Supported 00:25:24.633 Fixed Capacity Management: Not Supported 00:25:24.633 Variable Capacity Management: Not Supported 00:25:24.633 Delete Endurance Group: Not Supported 00:25:24.633 Delete NVM Set: Not Supported 00:25:24.633 Extended LBA Formats Supported: Not Supported 00:25:24.633 Flexible Data Placement Supported: Not Supported 00:25:24.633 00:25:24.633 Controller Memory Buffer Support 00:25:24.633 ================================ 00:25:24.633 Supported: No 00:25:24.633 00:25:24.633 Persistent Memory Region Support 00:25:24.633 ================================ 00:25:24.633 Supported: No 00:25:24.633 00:25:24.633 Admin Command Set Attributes 00:25:24.633 ============================ 00:25:24.633 Security Send/Receive: Not Supported 00:25:24.633 Format NVM: Not Supported 00:25:24.633 Firmware Activate/Download: Not Supported 00:25:24.633 Namespace Management: Not Supported 00:25:24.633 Device Self-Test: Not Supported 00:25:24.633 Directives: Not Supported 00:25:24.633 NVMe-MI: Not Supported 00:25:24.633 Virtualization Management: Not Supported 00:25:24.633 Doorbell Buffer Config: Not Supported 00:25:24.633 Get LBA Status Capability: Not Supported 00:25:24.633 Command & Feature Lockdown Capability: Not Supported 00:25:24.633 Abort Command Limit: 1 00:25:24.633 Async Event Request Limit: 4 00:25:24.633 Number of Firmware Slots: N/A 00:25:24.633 Firmware Slot 1 Read-Only: N/A 00:25:24.633 Firmware Activation Without Reset: N/A 00:25:24.633 Multiple Update Detection Support: N/A 00:25:24.633 Firmware Update Granularity: No Information Provided 00:25:24.633 Per-Namespace SMART Log: No 00:25:24.633 Asymmetric Namespace Access Log Page: Not Supported 00:25:24.633 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:25:24.633 Command Effects Log Page: Not Supported 00:25:24.633 Get Log Page Extended Data: Supported 00:25:24.633 Telemetry Log Pages: Not Supported 00:25:24.633 Persistent Event Log Pages: Not Supported 00:25:24.633 Supported Log Pages Log Page: May Support 00:25:24.633 Commands Supported & Effects Log Page: Not Supported 00:25:24.633 Feature Identifiers & Effects Log Page:May Support 00:25:24.633 NVMe-MI Commands & Effects Log Page: May Support 00:25:24.633 Data Area 4 for Telemetry Log: Not Supported 00:25:24.633 Error Log Page Entries Supported: 128 00:25:24.633 Keep Alive: Not Supported 00:25:24.633 00:25:24.633 NVM Command Set Attributes 00:25:24.633 ========================== 00:25:24.633 Submission Queue Entry Size 00:25:24.633 Max: 1 00:25:24.633 Min: 1 00:25:24.633 Completion Queue Entry Size 00:25:24.633 Max: 1 00:25:24.633 Min: 1 00:25:24.633 Number of Namespaces: 0 00:25:24.633 Compare Command: Not Supported 00:25:24.633 Write Uncorrectable Command: Not Supported 00:25:24.633 Dataset Management Command: Not Supported 00:25:24.633 Write Zeroes Command: Not Supported 00:25:24.633 Set Features Save Field: Not Supported 00:25:24.633 Reservations: Not Supported 00:25:24.633 Timestamp: Not Supported 00:25:24.633 Copy: Not Supported 00:25:24.633 Volatile Write Cache: Not Present 00:25:24.633 Atomic Write Unit (Normal): 1 00:25:24.633 Atomic Write Unit (PFail): 1 00:25:24.633 Atomic Compare & Write Unit: 1 00:25:24.633 Fused Compare & Write: Supported 00:25:24.633 Scatter-Gather List 00:25:24.633 SGL Command Set: Supported 00:25:24.633 SGL Keyed: Supported 00:25:24.633 SGL Bit Bucket Descriptor: Not Supported 00:25:24.633 SGL Metadata Pointer: Not Supported 00:25:24.633 Oversized SGL: Not Supported 00:25:24.633 SGL Metadata Address: Not Supported 00:25:24.633 SGL Offset: Supported 00:25:24.633 Transport SGL Data Block: Not Supported 00:25:24.633 Replay Protected Memory Block: Not Supported 00:25:24.633 00:25:24.633 
Firmware Slot Information 00:25:24.633 ========================= 00:25:24.633 Active slot: 0 00:25:24.633 00:25:24.633 00:25:24.633 Error Log 00:25:24.633 ========= 00:25:24.633 00:25:24.633 Active Namespaces 00:25:24.633 ================= 00:25:24.633 Discovery Log Page 00:25:24.633 ================== 00:25:24.633 Generation Counter: 2 00:25:24.633 Number of Records: 2 00:25:24.633 Record Format: 0 00:25:24.633 00:25:24.633 Discovery Log Entry 0 00:25:24.633 ---------------------- 00:25:24.633 Transport Type: 3 (TCP) 00:25:24.633 Address Family: 1 (IPv4) 00:25:24.633 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:24.633 Entry Flags: 00:25:24.633 Duplicate Returned Information: 1 00:25:24.633 Explicit Persistent Connection Support for Discovery: 1 00:25:24.633 Transport Requirements: 00:25:24.633 Secure Channel: Not Required 00:25:24.633 Port ID: 0 (0x0000) 00:25:24.633 Controller ID: 65535 (0xffff) 00:25:24.633 Admin Max SQ Size: 128 00:25:24.633 Transport Service Identifier: 4420 00:25:24.633 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:24.633 Transport Address: 10.0.0.2 00:25:24.633 Discovery Log Entry 1 00:25:24.633 ---------------------- 00:25:24.633 Transport Type: 3 (TCP) 00:25:24.633 Address Family: 1 (IPv4) 00:25:24.633 Subsystem Type: 2 (NVM Subsystem) 00:25:24.633 Entry Flags: 00:25:24.633 Duplicate Returned Information: 0 00:25:24.633 Explicit Persistent Connection Support for Discovery: 0 00:25:24.633 Transport Requirements: 00:25:24.633 Secure Channel: Not Required 00:25:24.633 Port ID: 0 (0x0000) 00:25:24.633 Controller ID: 65535 (0xffff) 00:25:24.633 Admin Max SQ Size: 128 00:25:24.633 Transport Service Identifier: 4420 00:25:24.633 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:24.633 Transport Address: 10.0.0.2 [2024-07-14 03:13:19.872051] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:24.633 [2024-07-14 03:13:19.872076] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.633 [2024-07-14 03:13:19.872088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.633 [2024-07-14 03:13:19.872102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.633 [2024-07-14 03:13:19.872112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.633 [2024-07-14 03:13:19.872126] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.633 [2024-07-14 03:13:19.872134] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.633 [2024-07-14 03:13:19.872141] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109feb0) 00:25:24.633 [2024-07-14 03:13:19.872153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-07-14 03:13:19.872192] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, cid 3, qid 0 00:25:24.633 [2024-07-14 03:13:19.872352] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.633 [2024-07-14 03:13:19.872367] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.633 [2024-07-14 03:13:19.872374] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.633 [2024-07-14 03:13:19.872381] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f93a0) on tqpair=0x109feb0 00:25:24.633 [2024-07-14 03:13:19.872394] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.633 [2024-07-14 03:13:19.872402] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.633 [2024-07-14 
03:13:19.872408] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109feb0) 00:25:24.634 [2024-07-14 03:13:19.872419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.634 [2024-07-14 03:13:19.872444] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, cid 3, qid 0 00:25:24.634 [2024-07-14 03:13:19.872605] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.634 [2024-07-14 03:13:19.872617] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.634 [2024-07-14 03:13:19.872624] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.872631] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f93a0) on tqpair=0x109feb0 00:25:24.634 [2024-07-14 03:13:19.872640] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:24.634 [2024-07-14 03:13:19.872649] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:24.634 [2024-07-14 03:13:19.872664] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.872673] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.872679] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109feb0) 00:25:24.634 [2024-07-14 03:13:19.872689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.634 [2024-07-14 03:13:19.872709] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, cid 3, qid 0 00:25:24.634 [2024-07-14 03:13:19.872863] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.634 [2024-07-14 
03:13:19.872888] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.634 [2024-07-14 03:13:19.872895] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.872902] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f93a0) on tqpair=0x109feb0 00:25:24.634 [2024-07-14 03:13:19.872922] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.872932] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.872938] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109feb0) 00:25:24.634 [2024-07-14 03:13:19.872949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.634 [2024-07-14 03:13:19.872975] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, cid 3, qid 0 00:25:24.634 [2024-07-14 03:13:19.873113] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.634 [2024-07-14 03:13:19.873126] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.634 [2024-07-14 03:13:19.873133] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.873140] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f93a0) on tqpair=0x109feb0 00:25:24.634 [2024-07-14 03:13:19.873157] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.873181] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.873188] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109feb0) 00:25:24.634 [2024-07-14 03:13:19.873199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.634 
[2024-07-14 03:13:19.873219] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, cid 3, qid 0 00:25:24.634 [2024-07-14 03:13:19.873361] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.634 [2024-07-14 03:13:19.873376] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.634 [2024-07-14 03:13:19.873383] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.873389] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f93a0) on tqpair=0x109feb0 00:25:24.634 [2024-07-14 03:13:19.873407] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.873416] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.873422] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109feb0) 00:25:24.634 [2024-07-14 03:13:19.873433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.634 [2024-07-14 03:13:19.873452] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, cid 3, qid 0 00:25:24.634 [2024-07-14 03:13:19.873583] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.634 [2024-07-14 03:13:19.873595] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.634 [2024-07-14 03:13:19.873602] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.873608] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f93a0) on tqpair=0x109feb0 00:25:24.634 [2024-07-14 03:13:19.873624] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.873633] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.873640] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109feb0) 00:25:24.634 [2024-07-14 03:13:19.873650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.634 [2024-07-14 03:13:19.873669] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, cid 3, qid 0 00:25:24.634 [2024-07-14 03:13:19.873829] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.634 [2024-07-14 03:13:19.873844] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.634 [2024-07-14 03:13:19.873874] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.873882] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f93a0) on tqpair=0x109feb0 00:25:24.634 [2024-07-14 03:13:19.873901] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.873922] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.873929] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109feb0) 00:25:24.634 [2024-07-14 03:13:19.873939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.634 [2024-07-14 03:13:19.873960] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, cid 3, qid 0 00:25:24.634 [2024-07-14 03:13:19.874105] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.634 [2024-07-14 03:13:19.874120] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.634 [2024-07-14 03:13:19.874128] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.874135] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f93a0) on tqpair=0x109feb0 00:25:24.634 
[2024-07-14 03:13:19.874167] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.874177] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.874183] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109feb0) 00:25:24.634 [2024-07-14 03:13:19.874194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.634 [2024-07-14 03:13:19.874214] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, cid 3, qid 0 00:25:24.634 [2024-07-14 03:13:19.874357] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.634 [2024-07-14 03:13:19.874372] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.634 [2024-07-14 03:13:19.874379] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.874385] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f93a0) on tqpair=0x109feb0 00:25:24.634 [2024-07-14 03:13:19.874402] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.874411] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.874418] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109feb0) 00:25:24.634 [2024-07-14 03:13:19.874428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.634 [2024-07-14 03:13:19.874448] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, cid 3, qid 0 00:25:24.634 [2024-07-14 03:13:19.874581] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.634 [2024-07-14 03:13:19.874593] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.634 
[2024-07-14 03:13:19.874600] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.874607] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f93a0) on tqpair=0x109feb0 00:25:24.634 [2024-07-14 03:13:19.874623] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.874632] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.634 [2024-07-14 03:13:19.874639] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109feb0) 00:25:24.635 [2024-07-14 03:13:19.874649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.635 [2024-07-14 03:13:19.874668] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, cid 3, qid 0 00:25:24.635 [2024-07-14 03:13:19.874809] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.635 [2024-07-14 03:13:19.874823] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.635 [2024-07-14 03:13:19.874830] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.635 [2024-07-14 03:13:19.874837] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f93a0) on tqpair=0x109feb0 00:25:24.635 [2024-07-14 03:13:19.874854] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.635 [2024-07-14 03:13:19.874863] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.635 [2024-07-14 03:13:19.874893] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109feb0) 00:25:24.635 [2024-07-14 03:13:19.874904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.635 [2024-07-14 03:13:19.874926] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, 
cid 3, qid 0 00:25:24.635 [2024-07-14 03:13:19.875068] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.635 [2024-07-14 03:13:19.875088] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.635 [2024-07-14 03:13:19.875096] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.635 [2024-07-14 03:13:19.875103] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f93a0) on tqpair=0x109feb0 00:25:24.635 [2024-07-14 03:13:19.875121] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.635 [2024-07-14 03:13:19.875130] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.635 [2024-07-14 03:13:19.875137] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109feb0) 00:25:24.635 [2024-07-14 03:13:19.875148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.635 [2024-07-14 03:13:19.875183] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, cid 3, qid 0 00:25:24.635 [2024-07-14 03:13:19.875321] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.635 [2024-07-14 03:13:19.875332] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.635 [2024-07-14 03:13:19.875339] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.635 [2024-07-14 03:13:19.875346] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f93a0) on tqpair=0x109feb0 00:25:24.635 [2024-07-14 03:13:19.875363] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.635 [2024-07-14 03:13:19.875372] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.635 [2024-07-14 03:13:19.875378] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109feb0) 00:25:24.635 [2024-07-14 03:13:19.875389] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.635 [2024-07-14 03:13:19.875408] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, cid 3, qid 0 00:25:24.635 [2024-07-14 03:13:19.875544] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.635 [2024-07-14 03:13:19.875555] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.635 [2024-07-14 03:13:19.875562] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.635 [2024-07-14 03:13:19.875569] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f93a0) on tqpair=0x109feb0 00:25:24.635 [2024-07-14 03:13:19.875585] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.635 [2024-07-14 03:13:19.875594] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.635 [2024-07-14 03:13:19.875601] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109feb0) 00:25:24.635 [2024-07-14 03:13:19.875611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.635 [2024-07-14 03:13:19.875630] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, cid 3, qid 0 00:25:24.635 [2024-07-14 03:13:19.875784] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.635 [2024-07-14 03:13:19.875799] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.635 [2024-07-14 03:13:19.875806] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.635 [2024-07-14 03:13:19.875813] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f93a0) on tqpair=0x109feb0 00:25:24.635 [2024-07-14 03:13:19.875830] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.635 [2024-07-14 03:13:19.875839] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.635 [2024-07-14 03:13:19.875861] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x109feb0) 00:25:24.635 [2024-07-14 03:13:19.879888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.635 [2024-07-14 03:13:19.879916] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x10f93a0, cid 3, qid 0 00:25:24.635 [2024-07-14 03:13:19.880098] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.635 [2024-07-14 03:13:19.880111] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.635 [2024-07-14 03:13:19.880123] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.635 [2024-07-14 03:13:19.880131] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x10f93a0) on tqpair=0x109feb0 00:25:24.635 [2024-07-14 03:13:19.880146] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:25:24.896 00:25:24.896 03:13:19 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:24.896 [2024-07-14 03:13:19.912514] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:25:24.896 [2024-07-14 03:13:19.912559] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087015 ] 00:25:24.896 EAL: No free 2048 kB hugepages reported on node 1 00:25:24.896 [2024-07-14 03:13:19.945637] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:24.896 [2024-07-14 03:13:19.945685] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:24.896 [2024-07-14 03:13:19.945695] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:24.896 [2024-07-14 03:13:19.945710] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:24.896 [2024-07-14 03:13:19.945721] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:24.896 [2024-07-14 03:13:19.948900] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:24.896 [2024-07-14 03:13:19.948951] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xdd0eb0 0 00:25:24.896 [2024-07-14 03:13:19.956201] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:24.896 [2024-07-14 03:13:19.956220] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:24.896 [2024-07-14 03:13:19.956228] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:24.896 [2024-07-14 03:13:19.956249] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:24.896 [2024-07-14 03:13:19.956287] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.896 [2024-07-14 03:13:19.956298] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.896 [2024-07-14 
03:13:19.956305] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd0eb0) 00:25:24.896 [2024-07-14 03:13:19.956319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:24.896 [2024-07-14 03:13:19.956344] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe29f80, cid 0, qid 0 00:25:24.896 [2024-07-14 03:13:19.962879] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.896 [2024-07-14 03:13:19.962905] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.896 [2024-07-14 03:13:19.962912] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.896 [2024-07-14 03:13:19.962920] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe29f80) on tqpair=0xdd0eb0 00:25:24.896 [2024-07-14 03:13:19.962934] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:24.896 [2024-07-14 03:13:19.962944] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:24.896 [2024-07-14 03:13:19.962953] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:24.896 [2024-07-14 03:13:19.962970] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.896 [2024-07-14 03:13:19.962982] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.896 [2024-07-14 03:13:19.962990] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd0eb0) 00:25:24.897 [2024-07-14 03:13:19.963001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.897 [2024-07-14 03:13:19.963024] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe29f80, cid 0, qid 0 00:25:24.897 [2024-07-14 
03:13:19.963202] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.897 [2024-07-14 03:13:19.963214] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.897 [2024-07-14 03:13:19.963220] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.963227] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe29f80) on tqpair=0xdd0eb0 00:25:24.897 [2024-07-14 03:13:19.963235] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:24.897 [2024-07-14 03:13:19.963248] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:24.897 [2024-07-14 03:13:19.963260] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.963267] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.963274] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd0eb0) 00:25:24.897 [2024-07-14 03:13:19.963284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.897 [2024-07-14 03:13:19.963305] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe29f80, cid 0, qid 0 00:25:24.897 [2024-07-14 03:13:19.963454] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.897 [2024-07-14 03:13:19.963469] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.897 [2024-07-14 03:13:19.963476] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.963483] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe29f80) on tqpair=0xdd0eb0 00:25:24.897 [2024-07-14 03:13:19.963491] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:24.897 [2024-07-14 03:13:19.963505] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:24.897 [2024-07-14 03:13:19.963518] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.963525] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.963531] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd0eb0) 00:25:24.897 [2024-07-14 03:13:19.963542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.897 [2024-07-14 03:13:19.963562] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe29f80, cid 0, qid 0 00:25:24.897 [2024-07-14 03:13:19.963720] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.897 [2024-07-14 03:13:19.963732] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.897 [2024-07-14 03:13:19.963739] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.963745] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe29f80) on tqpair=0xdd0eb0 00:25:24.897 [2024-07-14 03:13:19.963754] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:24.897 [2024-07-14 03:13:19.963770] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.963778] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.963785] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd0eb0) 00:25:24.897 [2024-07-14 03:13:19.963795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.897 [2024-07-14 03:13:19.963820] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe29f80, cid 0, qid 0 00:25:24.897 [2024-07-14 03:13:19.963977] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.897 [2024-07-14 03:13:19.963992] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.897 [2024-07-14 03:13:19.963999] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.964006] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe29f80) on tqpair=0xdd0eb0 00:25:24.897 [2024-07-14 03:13:19.964013] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:24.897 [2024-07-14 03:13:19.964022] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:24.897 [2024-07-14 03:13:19.964035] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:24.897 [2024-07-14 03:13:19.964145] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:24.897 [2024-07-14 03:13:19.964152] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:24.897 [2024-07-14 03:13:19.964179] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.964187] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.964193] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd0eb0) 00:25:24.897 [2024-07-14 03:13:19.964203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.897 [2024-07-14 03:13:19.964224] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe29f80, cid 0, qid 0 00:25:24.897 [2024-07-14 03:13:19.964401] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.897 [2024-07-14 03:13:19.964417] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.897 [2024-07-14 03:13:19.964423] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.964430] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe29f80) on tqpair=0xdd0eb0 00:25:24.897 [2024-07-14 03:13:19.964438] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:24.897 [2024-07-14 03:13:19.964455] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.964464] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.964471] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd0eb0) 00:25:24.897 [2024-07-14 03:13:19.964481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.897 [2024-07-14 03:13:19.964502] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe29f80, cid 0, qid 0 00:25:24.897 [2024-07-14 03:13:19.964660] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.897 [2024-07-14 03:13:19.964672] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.897 [2024-07-14 03:13:19.964678] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.964685] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe29f80) on tqpair=0xdd0eb0 00:25:24.897 [2024-07-14 
03:13:19.964693] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:24.897 [2024-07-14 03:13:19.964701] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:24.897 [2024-07-14 03:13:19.964714] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:24.897 [2024-07-14 03:13:19.964734] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:24.897 [2024-07-14 03:13:19.964748] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.964756] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.964762] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd0eb0) 00:25:24.897 [2024-07-14 03:13:19.964773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.897 [2024-07-14 03:13:19.964810] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe29f80, cid 0, qid 0 00:25:24.897 [2024-07-14 03:13:19.965071] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.897 [2024-07-14 03:13:19.965087] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.897 [2024-07-14 03:13:19.965094] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.965100] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdd0eb0): datao=0, datal=4096, cccid=0 00:25:24.897 [2024-07-14 03:13:19.965108] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe29f80) on 
tqpair(0xdd0eb0): expected_datao=0, payload_size=4096 00:25:24.897 [2024-07-14 03:13:19.965140] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:19.965149] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:20.006022] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.897 [2024-07-14 03:13:20.006041] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.897 [2024-07-14 03:13:20.006049] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:20.006056] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe29f80) on tqpair=0xdd0eb0 00:25:24.897 [2024-07-14 03:13:20.006067] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:24.897 [2024-07-14 03:13:20.006076] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:24.897 [2024-07-14 03:13:20.006083] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:24.897 [2024-07-14 03:13:20.006090] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:24.897 [2024-07-14 03:13:20.006097] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:24.897 [2024-07-14 03:13:20.006106] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:24.897 [2024-07-14 03:13:20.006125] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:24.897 [2024-07-14 03:13:20.006138] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:20.006145] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:20.006152] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd0eb0) 00:25:24.897 [2024-07-14 03:13:20.006169] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:24.897 [2024-07-14 03:13:20.006192] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe29f80, cid 0, qid 0 00:25:24.897 [2024-07-14 03:13:20.006352] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.897 [2024-07-14 03:13:20.006364] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.897 [2024-07-14 03:13:20.006371] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:20.006378] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe29f80) on tqpair=0xdd0eb0 00:25:24.897 [2024-07-14 03:13:20.006388] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:20.006399] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:20.006407] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xdd0eb0) 00:25:24.897 [2024-07-14 03:13:20.006416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.897 [2024-07-14 03:13:20.006426] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:20.006433] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.897 [2024-07-14 03:13:20.006439] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xdd0eb0) 00:25:24.897 [2024-07-14 03:13:20.006448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:25:24.897 [2024-07-14 03:13:20.006458] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.006465] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.006471] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xdd0eb0) 00:25:24.898 [2024-07-14 03:13:20.006480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.898 [2024-07-14 03:13:20.006489] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.006495] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.006502] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd0eb0) 00:25:24.898 [2024-07-14 03:13:20.006510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.898 [2024-07-14 03:13:20.006519] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:24.898 [2024-07-14 03:13:20.006537] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:24.898 [2024-07-14 03:13:20.006549] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.006556] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.006563] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdd0eb0) 00:25:24.898 [2024-07-14 03:13:20.006573] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.898 
[2024-07-14 03:13:20.006596] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe29f80, cid 0, qid 0 00:25:24.898 [2024-07-14 03:13:20.006607] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a0e0, cid 1, qid 0 00:25:24.898 [2024-07-14 03:13:20.006614] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a240, cid 2, qid 0 00:25:24.898 [2024-07-14 03:13:20.006622] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a3a0, cid 3, qid 0 00:25:24.898 [2024-07-14 03:13:20.006629] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a500, cid 4, qid 0 00:25:24.898 [2024-07-14 03:13:20.006832] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.898 [2024-07-14 03:13:20.006848] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.898 [2024-07-14 03:13:20.006855] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.006861] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a500) on tqpair=0xdd0eb0 00:25:24.898 [2024-07-14 03:13:20.006879] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:24.898 [2024-07-14 03:13:20.006889] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:24.898 [2024-07-14 03:13:20.006903] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:24.898 [2024-07-14 03:13:20.006928] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:24.898 [2024-07-14 03:13:20.006940] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.006947] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.006954] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdd0eb0) 00:25:24.898 [2024-07-14 03:13:20.006964] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:24.898 [2024-07-14 03:13:20.006985] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a500, cid 4, qid 0 00:25:24.898 [2024-07-14 03:13:20.007143] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.898 [2024-07-14 03:13:20.007165] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.898 [2024-07-14 03:13:20.007171] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.007178] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a500) on tqpair=0xdd0eb0 00:25:24.898 [2024-07-14 03:13:20.007242] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:24.898 [2024-07-14 03:13:20.007260] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:24.898 [2024-07-14 03:13:20.007274] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.007281] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.007287] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdd0eb0) 00:25:24.898 [2024-07-14 03:13:20.007298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.898 [2024-07-14 03:13:20.007318] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xe2a500, cid 4, qid 0 00:25:24.898 [2024-07-14 03:13:20.007503] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.898 [2024-07-14 03:13:20.007515] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.898 [2024-07-14 03:13:20.007521] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.007528] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdd0eb0): datao=0, datal=4096, cccid=4 00:25:24.898 [2024-07-14 03:13:20.007535] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe2a500) on tqpair(0xdd0eb0): expected_datao=0, payload_size=4096 00:25:24.898 [2024-07-14 03:13:20.007546] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.007554] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.007611] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.898 [2024-07-14 03:13:20.007622] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.898 [2024-07-14 03:13:20.007628] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.007635] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a500) on tqpair=0xdd0eb0 00:25:24.898 [2024-07-14 03:13:20.007654] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:24.898 [2024-07-14 03:13:20.007669] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:24.898 [2024-07-14 03:13:20.007685] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:24.898 [2024-07-14 03:13:20.007697] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.007705] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.007711] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdd0eb0) 00:25:24.898 [2024-07-14 03:13:20.007725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.898 [2024-07-14 03:13:20.007746] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a500, cid 4, qid 0 00:25:24.898 [2024-07-14 03:13:20.007930] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.898 [2024-07-14 03:13:20.007946] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.898 [2024-07-14 03:13:20.007953] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.007959] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdd0eb0): datao=0, datal=4096, cccid=4 00:25:24.898 [2024-07-14 03:13:20.007967] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe2a500) on tqpair(0xdd0eb0): expected_datao=0, payload_size=4096 00:25:24.898 [2024-07-14 03:13:20.007977] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.007985] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.008031] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.898 [2024-07-14 03:13:20.008042] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.898 [2024-07-14 03:13:20.008048] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.008055] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a500) on tqpair=0xdd0eb0 00:25:24.898 [2024-07-14 03:13:20.008076] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify 
namespace id descriptors (timeout 30000 ms) 00:25:24.898 [2024-07-14 03:13:20.008095] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:24.898 [2024-07-14 03:13:20.008108] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.008115] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.008122] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdd0eb0) 00:25:24.898 [2024-07-14 03:13:20.008132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.898 [2024-07-14 03:13:20.008153] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a500, cid 4, qid 0 00:25:24.898 [2024-07-14 03:13:20.008323] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.898 [2024-07-14 03:13:20.008338] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.898 [2024-07-14 03:13:20.008345] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.008351] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdd0eb0): datao=0, datal=4096, cccid=4 00:25:24.898 [2024-07-14 03:13:20.008359] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe2a500) on tqpair(0xdd0eb0): expected_datao=0, payload_size=4096 00:25:24.898 [2024-07-14 03:13:20.008369] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.008377] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.008425] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.898 [2024-07-14 03:13:20.008436] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:25:24.898 [2024-07-14 03:13:20.008442] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.008449] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a500) on tqpair=0xdd0eb0 00:25:24.898 [2024-07-14 03:13:20.008461] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:24.898 [2024-07-14 03:13:20.008475] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:24.898 [2024-07-14 03:13:20.008490] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:24.898 [2024-07-14 03:13:20.008505] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:24.898 [2024-07-14 03:13:20.008514] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:24.898 [2024-07-14 03:13:20.008522] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:24.898 [2024-07-14 03:13:20.008530] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:24.898 [2024-07-14 03:13:20.008539] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:24.898 [2024-07-14 03:13:20.008558] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.008567] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.898 [2024-07-14 03:13:20.008573] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0xdd0eb0) 00:25:24.899 [2024-07-14 03:13:20.008583] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.899 [2024-07-14 03:13:20.008594] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.008601] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.008608] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdd0eb0) 00:25:24.899 [2024-07-14 03:13:20.008616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:24.899 [2024-07-14 03:13:20.008641] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a500, cid 4, qid 0 00:25:24.899 [2024-07-14 03:13:20.008653] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a660, cid 5, qid 0 00:25:24.899 [2024-07-14 03:13:20.008815] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.899 [2024-07-14 03:13:20.008830] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.899 [2024-07-14 03:13:20.008837] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.008843] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a500) on tqpair=0xdd0eb0 00:25:24.899 [2024-07-14 03:13:20.008854] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.899 [2024-07-14 03:13:20.008863] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.899 [2024-07-14 03:13:20.008878] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.008885] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a660) on tqpair=0xdd0eb0 00:25:24.899 [2024-07-14 03:13:20.008900] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.008918] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.008925] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdd0eb0) 00:25:24.899 [2024-07-14 03:13:20.008935] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.899 [2024-07-14 03:13:20.008956] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a660, cid 5, qid 0 00:25:24.899 [2024-07-14 03:13:20.009113] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.899 [2024-07-14 03:13:20.009126] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.899 [2024-07-14 03:13:20.009132] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.009139] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a660) on tqpair=0xdd0eb0 00:25:24.899 [2024-07-14 03:13:20.009154] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.009170] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.009181] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdd0eb0) 00:25:24.899 [2024-07-14 03:13:20.009192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.899 [2024-07-14 03:13:20.009212] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a660, cid 5, qid 0 00:25:24.899 [2024-07-14 03:13:20.009360] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.899 [2024-07-14 03:13:20.009375] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.899 [2024-07-14 03:13:20.009382] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.009388] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a660) on tqpair=0xdd0eb0 00:25:24.899 [2024-07-14 03:13:20.009404] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.009413] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.009420] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdd0eb0) 00:25:24.899 [2024-07-14 03:13:20.009430] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.899 [2024-07-14 03:13:20.009450] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a660, cid 5, qid 0 00:25:24.899 [2024-07-14 03:13:20.009606] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.899 [2024-07-14 03:13:20.009617] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.899 [2024-07-14 03:13:20.009624] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.009631] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a660) on tqpair=0xdd0eb0 00:25:24.899 [2024-07-14 03:13:20.009650] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.009660] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.009666] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xdd0eb0) 00:25:24.899 [2024-07-14 03:13:20.009677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.899 [2024-07-14 03:13:20.009688] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.009696] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.009702] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xdd0eb0) 00:25:24.899 [2024-07-14 03:13:20.009711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.899 [2024-07-14 03:13:20.009722] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.009729] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.009736] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xdd0eb0) 00:25:24.899 [2024-07-14 03:13:20.009745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.899 [2024-07-14 03:13:20.009756] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.009764] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.009770] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xdd0eb0) 00:25:24.899 [2024-07-14 03:13:20.009780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.899 [2024-07-14 03:13:20.009801] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a660, cid 5, qid 0 00:25:24.899 [2024-07-14 03:13:20.009812] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a500, cid 4, qid 0 00:25:24.899 [2024-07-14 03:13:20.009823] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a7c0, cid 6, qid 0 
00:25:24.899 [2024-07-14 03:13:20.009831] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a920, cid 7, qid 0 00:25:24.899 [2024-07-14 03:13:20.010066] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.899 [2024-07-14 03:13:20.010082] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.899 [2024-07-14 03:13:20.010088] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.010095] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdd0eb0): datao=0, datal=8192, cccid=5 00:25:24.899 [2024-07-14 03:13:20.010102] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe2a660) on tqpair(0xdd0eb0): expected_datao=0, payload_size=8192 00:25:24.899 [2024-07-14 03:13:20.010210] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.010221] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.010230] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.899 [2024-07-14 03:13:20.010239] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.899 [2024-07-14 03:13:20.010245] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.010252] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdd0eb0): datao=0, datal=512, cccid=4 00:25:24.899 [2024-07-14 03:13:20.010259] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe2a500) on tqpair(0xdd0eb0): expected_datao=0, payload_size=512 00:25:24.899 [2024-07-14 03:13:20.010269] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.010276] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.010285] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.899 [2024-07-14 03:13:20.010293] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.899 [2024-07-14 03:13:20.010300] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.010306] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdd0eb0): datao=0, datal=512, cccid=6 00:25:24.899 [2024-07-14 03:13:20.010313] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe2a7c0) on tqpair(0xdd0eb0): expected_datao=0, payload_size=512 00:25:24.899 [2024-07-14 03:13:20.010323] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.010330] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.010338] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:24.899 [2024-07-14 03:13:20.010347] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:24.899 [2024-07-14 03:13:20.010353] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.010359] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xdd0eb0): datao=0, datal=4096, cccid=7 00:25:24.899 [2024-07-14 03:13:20.010366] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe2a920) on tqpair(0xdd0eb0): expected_datao=0, payload_size=4096 00:25:24.899 [2024-07-14 03:13:20.010377] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.010384] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.010395] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.899 [2024-07-14 03:13:20.010405] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.899 [2024-07-14 03:13:20.010411] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.010418] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0xe2a660) on tqpair=0xdd0eb0 00:25:24.899 [2024-07-14 03:13:20.010438] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.899 [2024-07-14 03:13:20.010449] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.899 [2024-07-14 03:13:20.010455] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.010461] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a500) on tqpair=0xdd0eb0 00:25:24.899 [2024-07-14 03:13:20.010477] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.899 [2024-07-14 03:13:20.010488] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.899 [2024-07-14 03:13:20.010494] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.010501] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a7c0) on tqpair=0xdd0eb0 00:25:24.899 [2024-07-14 03:13:20.010511] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.899 [2024-07-14 03:13:20.010520] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.899 [2024-07-14 03:13:20.010527] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.899 [2024-07-14 03:13:20.010533] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a920) on tqpair=0xdd0eb0 00:25:24.899 ===================================================== 00:25:24.899 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:24.899 ===================================================== 00:25:24.899 Controller Capabilities/Features 00:25:24.899 ================================ 00:25:24.899 Vendor ID: 8086 00:25:24.899 Subsystem Vendor ID: 8086 00:25:24.899 Serial Number: SPDK00000000000001 00:25:24.899 Model Number: SPDK bdev Controller 00:25:24.899 Firmware Version: 24.01.1 00:25:24.899 Recommended Arb Burst: 6 00:25:24.899 IEEE 
OUI Identifier: e4 d2 5c 00:25:24.900 Multi-path I/O 00:25:24.900 May have multiple subsystem ports: Yes 00:25:24.900 May have multiple controllers: Yes 00:25:24.900 Associated with SR-IOV VF: No 00:25:24.900 Max Data Transfer Size: 131072 00:25:24.900 Max Number of Namespaces: 32 00:25:24.900 Max Number of I/O Queues: 127 00:25:24.900 NVMe Specification Version (VS): 1.3 00:25:24.900 NVMe Specification Version (Identify): 1.3 00:25:24.900 Maximum Queue Entries: 128 00:25:24.900 Contiguous Queues Required: Yes 00:25:24.900 Arbitration Mechanisms Supported 00:25:24.900 Weighted Round Robin: Not Supported 00:25:24.900 Vendor Specific: Not Supported 00:25:24.900 Reset Timeout: 15000 ms 00:25:24.900 Doorbell Stride: 4 bytes 00:25:24.900 NVM Subsystem Reset: Not Supported 00:25:24.900 Command Sets Supported 00:25:24.900 NVM Command Set: Supported 00:25:24.900 Boot Partition: Not Supported 00:25:24.900 Memory Page Size Minimum: 4096 bytes 00:25:24.900 Memory Page Size Maximum: 4096 bytes 00:25:24.900 Persistent Memory Region: Not Supported 00:25:24.900 Optional Asynchronous Events Supported 00:25:24.900 Namespace Attribute Notices: Supported 00:25:24.900 Firmware Activation Notices: Not Supported 00:25:24.900 ANA Change Notices: Not Supported 00:25:24.900 PLE Aggregate Log Change Notices: Not Supported 00:25:24.900 LBA Status Info Alert Notices: Not Supported 00:25:24.900 EGE Aggregate Log Change Notices: Not Supported 00:25:24.900 Normal NVM Subsystem Shutdown event: Not Supported 00:25:24.900 Zone Descriptor Change Notices: Not Supported 00:25:24.900 Discovery Log Change Notices: Not Supported 00:25:24.900 Controller Attributes 00:25:24.900 128-bit Host Identifier: Supported 00:25:24.900 Non-Operational Permissive Mode: Not Supported 00:25:24.900 NVM Sets: Not Supported 00:25:24.900 Read Recovery Levels: Not Supported 00:25:24.900 Endurance Groups: Not Supported 00:25:24.900 Predictable Latency Mode: Not Supported 00:25:24.900 Traffic Based Keep ALive: Not Supported 
00:25:24.900 Namespace Granularity: Not Supported 00:25:24.900 SQ Associations: Not Supported 00:25:24.900 UUID List: Not Supported 00:25:24.900 Multi-Domain Subsystem: Not Supported 00:25:24.900 Fixed Capacity Management: Not Supported 00:25:24.900 Variable Capacity Management: Not Supported 00:25:24.900 Delete Endurance Group: Not Supported 00:25:24.900 Delete NVM Set: Not Supported 00:25:24.900 Extended LBA Formats Supported: Not Supported 00:25:24.900 Flexible Data Placement Supported: Not Supported 00:25:24.900 00:25:24.900 Controller Memory Buffer Support 00:25:24.900 ================================ 00:25:24.900 Supported: No 00:25:24.900 00:25:24.900 Persistent Memory Region Support 00:25:24.900 ================================ 00:25:24.900 Supported: No 00:25:24.900 00:25:24.900 Admin Command Set Attributes 00:25:24.900 ============================ 00:25:24.900 Security Send/Receive: Not Supported 00:25:24.900 Format NVM: Not Supported 00:25:24.900 Firmware Activate/Download: Not Supported 00:25:24.900 Namespace Management: Not Supported 00:25:24.900 Device Self-Test: Not Supported 00:25:24.900 Directives: Not Supported 00:25:24.900 NVMe-MI: Not Supported 00:25:24.900 Virtualization Management: Not Supported 00:25:24.900 Doorbell Buffer Config: Not Supported 00:25:24.900 Get LBA Status Capability: Not Supported 00:25:24.900 Command & Feature Lockdown Capability: Not Supported 00:25:24.900 Abort Command Limit: 4 00:25:24.900 Async Event Request Limit: 4 00:25:24.900 Number of Firmware Slots: N/A 00:25:24.900 Firmware Slot 1 Read-Only: N/A 00:25:24.900 Firmware Activation Without Reset: N/A 00:25:24.900 Multiple Update Detection Support: N/A 00:25:24.900 Firmware Update Granularity: No Information Provided 00:25:24.900 Per-Namespace SMART Log: No 00:25:24.900 Asymmetric Namespace Access Log Page: Not Supported 00:25:24.900 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:24.900 Command Effects Log Page: Supported 00:25:24.900 Get Log Page Extended Data: 
Supported 00:25:24.900 Telemetry Log Pages: Not Supported 00:25:24.900 Persistent Event Log Pages: Not Supported 00:25:24.900 Supported Log Pages Log Page: May Support 00:25:24.900 Commands Supported & Effects Log Page: Not Supported 00:25:24.900 Feature Identifiers & Effects Log Page:May Support 00:25:24.900 NVMe-MI Commands & Effects Log Page: May Support 00:25:24.900 Data Area 4 for Telemetry Log: Not Supported 00:25:24.900 Error Log Page Entries Supported: 128 00:25:24.900 Keep Alive: Supported 00:25:24.900 Keep Alive Granularity: 10000 ms 00:25:24.900 00:25:24.900 NVM Command Set Attributes 00:25:24.900 ========================== 00:25:24.900 Submission Queue Entry Size 00:25:24.900 Max: 64 00:25:24.900 Min: 64 00:25:24.900 Completion Queue Entry Size 00:25:24.900 Max: 16 00:25:24.900 Min: 16 00:25:24.900 Number of Namespaces: 32 00:25:24.900 Compare Command: Supported 00:25:24.900 Write Uncorrectable Command: Not Supported 00:25:24.900 Dataset Management Command: Supported 00:25:24.900 Write Zeroes Command: Supported 00:25:24.900 Set Features Save Field: Not Supported 00:25:24.900 Reservations: Supported 00:25:24.900 Timestamp: Not Supported 00:25:24.900 Copy: Supported 00:25:24.900 Volatile Write Cache: Present 00:25:24.900 Atomic Write Unit (Normal): 1 00:25:24.900 Atomic Write Unit (PFail): 1 00:25:24.900 Atomic Compare & Write Unit: 1 00:25:24.900 Fused Compare & Write: Supported 00:25:24.900 Scatter-Gather List 00:25:24.900 SGL Command Set: Supported 00:25:24.900 SGL Keyed: Supported 00:25:24.900 SGL Bit Bucket Descriptor: Not Supported 00:25:24.900 SGL Metadata Pointer: Not Supported 00:25:24.900 Oversized SGL: Not Supported 00:25:24.900 SGL Metadata Address: Not Supported 00:25:24.900 SGL Offset: Supported 00:25:24.900 Transport SGL Data Block: Not Supported 00:25:24.900 Replay Protected Memory Block: Not Supported 00:25:24.900 00:25:24.900 Firmware Slot Information 00:25:24.900 ========================= 00:25:24.900 Active slot: 1 00:25:24.900 Slot 1 
Firmware Revision: 24.01.1 00:25:24.900 00:25:24.900 00:25:24.900 Commands Supported and Effects 00:25:24.900 ============================== 00:25:24.900 Admin Commands 00:25:24.900 -------------- 00:25:24.900 Get Log Page (02h): Supported 00:25:24.900 Identify (06h): Supported 00:25:24.900 Abort (08h): Supported 00:25:24.900 Set Features (09h): Supported 00:25:24.900 Get Features (0Ah): Supported 00:25:24.900 Asynchronous Event Request (0Ch): Supported 00:25:24.900 Keep Alive (18h): Supported 00:25:24.900 I/O Commands 00:25:24.900 ------------ 00:25:24.900 Flush (00h): Supported LBA-Change 00:25:24.900 Write (01h): Supported LBA-Change 00:25:24.900 Read (02h): Supported 00:25:24.900 Compare (05h): Supported 00:25:24.900 Write Zeroes (08h): Supported LBA-Change 00:25:24.900 Dataset Management (09h): Supported LBA-Change 00:25:24.900 Copy (19h): Supported LBA-Change 00:25:24.900 Unknown (79h): Supported LBA-Change 00:25:24.900 Unknown (7Ah): Supported 00:25:24.900 00:25:24.900 Error Log 00:25:24.900 ========= 00:25:24.900 00:25:24.900 Arbitration 00:25:24.900 =========== 00:25:24.900 Arbitration Burst: 1 00:25:24.900 00:25:24.900 Power Management 00:25:24.900 ================ 00:25:24.900 Number of Power States: 1 00:25:24.900 Current Power State: Power State #0 00:25:24.900 Power State #0: 00:25:24.900 Max Power: 0.00 W 00:25:24.900 Non-Operational State: Operational 00:25:24.900 Entry Latency: Not Reported 00:25:24.900 Exit Latency: Not Reported 00:25:24.900 Relative Read Throughput: 0 00:25:24.900 Relative Read Latency: 0 00:25:24.900 Relative Write Throughput: 0 00:25:24.900 Relative Write Latency: 0 00:25:24.900 Idle Power: Not Reported 00:25:24.900 Active Power: Not Reported 00:25:24.900 Non-Operational Permissive Mode: Not Supported 00:25:24.900 00:25:24.900 Health Information 00:25:24.900 ================== 00:25:24.900 Critical Warnings: 00:25:24.900 Available Spare Space: OK 00:25:24.900 Temperature: OK 00:25:24.900 Device Reliability: OK 00:25:24.900 Read 
Only: No 00:25:24.900 Volatile Memory Backup: OK 00:25:24.900 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:24.900 Temperature Threshold: [2024-07-14 03:13:20.010656] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.900 [2024-07-14 03:13:20.010669] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.900 [2024-07-14 03:13:20.010676] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xdd0eb0) 00:25:24.900 [2024-07-14 03:13:20.010686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.900 [2024-07-14 03:13:20.010709] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a920, cid 7, qid 0 00:25:24.900 [2024-07-14 03:13:20.014893] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.900 [2024-07-14 03:13:20.014911] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.900 [2024-07-14 03:13:20.014918] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.900 [2024-07-14 03:13:20.014936] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a920) on tqpair=0xdd0eb0 00:25:24.901 [2024-07-14 03:13:20.014980] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:24.901 [2024-07-14 03:13:20.015003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.901 [2024-07-14 03:13:20.015015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.901 [2024-07-14 03:13:20.015030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.901 [2024-07-14 03:13:20.015046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:24.901 [2024-07-14 03:13:20.015065] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.015077] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.015086] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd0eb0) 00:25:24.901 [2024-07-14 03:13:20.015100] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.901 [2024-07-14 03:13:20.015131] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a3a0, cid 3, qid 0 00:25:24.901 [2024-07-14 03:13:20.015320] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.901 [2024-07-14 03:13:20.015338] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.901 [2024-07-14 03:13:20.015349] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.015359] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a3a0) on tqpair=0xdd0eb0 00:25:24.901 [2024-07-14 03:13:20.015375] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.015386] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.015394] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd0eb0) 00:25:24.901 [2024-07-14 03:13:20.015408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.901 [2024-07-14 03:13:20.015447] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a3a0, cid 3, qid 0 00:25:24.901 [2024-07-14 03:13:20.015649] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:25:24.901 [2024-07-14 03:13:20.015667] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.901 [2024-07-14 03:13:20.015676] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.015686] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a3a0) on tqpair=0xdd0eb0 00:25:24.901 [2024-07-14 03:13:20.015699] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:24.901 [2024-07-14 03:13:20.015711] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:24.901 [2024-07-14 03:13:20.015735] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.015748] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.015758] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd0eb0) 00:25:24.901 [2024-07-14 03:13:20.015772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.901 [2024-07-14 03:13:20.015802] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a3a0, cid 3, qid 0 00:25:24.901 [2024-07-14 03:13:20.016003] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.901 [2024-07-14 03:13:20.016022] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.901 [2024-07-14 03:13:20.016036] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.016051] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a3a0) on tqpair=0xdd0eb0 00:25:24.901 [2024-07-14 03:13:20.016072] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.016085] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:25:24.901 [2024-07-14 03:13:20.016094] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd0eb0) 00:25:24.901 [2024-07-14 03:13:20.016108] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.901 [2024-07-14 03:13:20.016136] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a3a0, cid 3, qid 0 00:25:24.901 [2024-07-14 03:13:20.016322] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.901 [2024-07-14 03:13:20.016340] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.901 [2024-07-14 03:13:20.016350] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.016360] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a3a0) on tqpair=0xdd0eb0 00:25:24.901 [2024-07-14 03:13:20.016397] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.016410] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.016419] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd0eb0) 00:25:24.901 [2024-07-14 03:13:20.016433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.901 [2024-07-14 03:13:20.016465] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a3a0, cid 3, qid 0 00:25:24.901 [2024-07-14 03:13:20.016672] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.901 [2024-07-14 03:13:20.016693] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.901 [2024-07-14 03:13:20.016705] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.016716] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete 
tcp_req(0xe2a3a0) on tqpair=0xdd0eb0 00:25:24.901 [2024-07-14 03:13:20.016741] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.016753] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.016760] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd0eb0) 00:25:24.901 [2024-07-14 03:13:20.016777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.901 [2024-07-14 03:13:20.016811] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a3a0, cid 3, qid 0 00:25:24.901 [2024-07-14 03:13:20.017005] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.901 [2024-07-14 03:13:20.017025] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.901 [2024-07-14 03:13:20.017033] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.017039] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a3a0) on tqpair=0xdd0eb0 00:25:24.901 [2024-07-14 03:13:20.017057] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.017066] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.017072] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd0eb0) 00:25:24.901 [2024-07-14 03:13:20.017083] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.901 [2024-07-14 03:13:20.017105] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a3a0, cid 3, qid 0 00:25:24.901 [2024-07-14 03:13:20.017266] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.901 [2024-07-14 03:13:20.017278] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:25:24.901 [2024-07-14 03:13:20.017284] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.017291] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a3a0) on tqpair=0xdd0eb0 00:25:24.901 [2024-07-14 03:13:20.017306] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.017315] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.017321] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd0eb0) 00:25:24.901 [2024-07-14 03:13:20.017332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.901 [2024-07-14 03:13:20.017351] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a3a0, cid 3, qid 0 00:25:24.901 [2024-07-14 03:13:20.017513] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.901 [2024-07-14 03:13:20.017534] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.901 [2024-07-14 03:13:20.017545] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.017552] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a3a0) on tqpair=0xdd0eb0 00:25:24.901 [2024-07-14 03:13:20.017570] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.017579] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.017586] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd0eb0) 00:25:24.901 [2024-07-14 03:13:20.017596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.901 [2024-07-14 03:13:20.017617] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a3a0, cid 3, qid 0 00:25:24.901 [2024-07-14 03:13:20.017758] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.901 [2024-07-14 03:13:20.017770] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.901 [2024-07-14 03:13:20.017777] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.017783] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a3a0) on tqpair=0xdd0eb0 00:25:24.901 [2024-07-14 03:13:20.017799] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.017808] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.901 [2024-07-14 03:13:20.017815] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd0eb0) 00:25:24.901 [2024-07-14 03:13:20.017825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.901 [2024-07-14 03:13:20.017850] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a3a0, cid 3, qid 0 00:25:24.901 [2024-07-14 03:13:20.018009] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.901 [2024-07-14 03:13:20.018022] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.901 [2024-07-14 03:13:20.018029] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.902 [2024-07-14 03:13:20.018036] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a3a0) on tqpair=0xdd0eb0 00:25:24.902 [2024-07-14 03:13:20.018051] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.902 [2024-07-14 03:13:20.018061] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.902 [2024-07-14 03:13:20.018067] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0xdd0eb0) 00:25:24.902 [2024-07-14 03:13:20.018077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.902 [2024-07-14 03:13:20.018098] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a3a0, cid 3, qid 0 00:25:24.902 [2024-07-14 03:13:20.018237] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.902 [2024-07-14 03:13:20.018249] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.902 [2024-07-14 03:13:20.018255] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.902 [2024-07-14 03:13:20.018262] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a3a0) on tqpair=0xdd0eb0 00:25:24.902 [2024-07-14 03:13:20.018277] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.902 [2024-07-14 03:13:20.018286] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.902 [2024-07-14 03:13:20.018293] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd0eb0) 00:25:24.902 [2024-07-14 03:13:20.018303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.902 [2024-07-14 03:13:20.018323] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a3a0, cid 3, qid 0 00:25:24.902 [2024-07-14 03:13:20.018483] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.902 [2024-07-14 03:13:20.018498] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.902 [2024-07-14 03:13:20.018504] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.902 [2024-07-14 03:13:20.018511] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a3a0) on tqpair=0xdd0eb0 00:25:24.902 [2024-07-14 03:13:20.018527] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:25:24.902 [2024-07-14 03:13:20.018536] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.902 [2024-07-14 03:13:20.018543] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd0eb0) 00:25:24.902 [2024-07-14 03:13:20.018553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.902 [2024-07-14 03:13:20.018574] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a3a0, cid 3, qid 0 00:25:24.902 [2024-07-14 03:13:20.018728] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.902 [2024-07-14 03:13:20.018740] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.902 [2024-07-14 03:13:20.018747] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.902 [2024-07-14 03:13:20.018754] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a3a0) on tqpair=0xdd0eb0 00:25:24.902 [2024-07-14 03:13:20.018770] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.902 [2024-07-14 03:13:20.018779] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.902 [2024-07-14 03:13:20.018785] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd0eb0) 00:25:24.902 [2024-07-14 03:13:20.018795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.902 [2024-07-14 03:13:20.018819] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a3a0, cid 3, qid 0 00:25:24.902 [2024-07-14 03:13:20.022882] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.902 [2024-07-14 03:13:20.022910] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.902 [2024-07-14 03:13:20.022918] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:25:24.902 [2024-07-14 03:13:20.022925] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a3a0) on tqpair=0xdd0eb0 00:25:24.902 [2024-07-14 03:13:20.022942] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:24.902 [2024-07-14 03:13:20.022952] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:24.902 [2024-07-14 03:13:20.022959] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xdd0eb0) 00:25:24.902 [2024-07-14 03:13:20.022970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.902 [2024-07-14 03:13:20.022992] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe2a3a0, cid 3, qid 0 00:25:24.902 [2024-07-14 03:13:20.023171] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:24.902 [2024-07-14 03:13:20.023183] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:24.902 [2024-07-14 03:13:20.023190] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:24.902 [2024-07-14 03:13:20.023197] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xe2a3a0) on tqpair=0xdd0eb0 00:25:24.902 [2024-07-14 03:13:20.023210] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:25:24.902 0 Kelvin (-273 Celsius) 00:25:24.902 Available Spare: 0% 00:25:24.902 Available Spare Threshold: 0% 00:25:24.902 Life Percentage Used: 0% 00:25:24.902 Data Units Read: 0 00:25:24.902 Data Units Written: 0 00:25:24.902 Host Read Commands: 0 00:25:24.902 Host Write Commands: 0 00:25:24.902 Controller Busy Time: 0 minutes 00:25:24.902 Power Cycles: 0 00:25:24.902 Power On Hours: 0 hours 00:25:24.902 Unsafe Shutdowns: 0 00:25:24.902 Unrecoverable Media Errors: 0 00:25:24.902 Lifetime Error Log Entries: 0 00:25:24.902 Warning Temperature Time: 0 minutes 
00:25:24.902 Critical Temperature Time: 0 minutes 00:25:24.902 00:25:24.902 Number of Queues 00:25:24.902 ================ 00:25:24.902 Number of I/O Submission Queues: 127 00:25:24.902 Number of I/O Completion Queues: 127 00:25:24.902 00:25:24.902 Active Namespaces 00:25:24.902 ================= 00:25:24.902 Namespace ID:1 00:25:24.902 Error Recovery Timeout: Unlimited 00:25:24.902 Command Set Identifier: NVM (00h) 00:25:24.902 Deallocate: Supported 00:25:24.902 Deallocated/Unwritten Error: Not Supported 00:25:24.902 Deallocated Read Value: Unknown 00:25:24.902 Deallocate in Write Zeroes: Not Supported 00:25:24.902 Deallocated Guard Field: 0xFFFF 00:25:24.902 Flush: Supported 00:25:24.902 Reservation: Supported 00:25:24.902 Namespace Sharing Capabilities: Multiple Controllers 00:25:24.902 Size (in LBAs): 131072 (0GiB) 00:25:24.902 Capacity (in LBAs): 131072 (0GiB) 00:25:24.902 Utilization (in LBAs): 131072 (0GiB) 00:25:24.902 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:24.902 EUI64: ABCDEF0123456789 00:25:24.902 UUID: e1946540-36d0-4703-8818-98c96dca0969 00:25:24.902 Thin Provisioning: Not Supported 00:25:24.902 Per-NS Atomic Units: Yes 00:25:24.902 Atomic Boundary Size (Normal): 0 00:25:24.902 Atomic Boundary Size (PFail): 0 00:25:24.902 Atomic Boundary Offset: 0 00:25:24.902 Maximum Single Source Range Length: 65535 00:25:24.902 Maximum Copy Length: 65535 00:25:24.902 Maximum Source Range Count: 1 00:25:24.902 NGUID/EUI64 Never Reused: No 00:25:24.902 Namespace Write Protected: No 00:25:24.902 Number of LBA Formats: 1 00:25:24.902 Current LBA Format: LBA Format #00 00:25:24.902 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:24.902 00:25:24.902 03:13:20 -- host/identify.sh@51 -- # sync 00:25:24.902 03:13:20 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:24.902 03:13:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.902 03:13:20 -- common/autotest_common.sh@10 -- # set +x 00:25:24.902 03:13:20 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.902 03:13:20 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:24.902 03:13:20 -- host/identify.sh@56 -- # nvmftestfini 00:25:24.902 03:13:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:24.902 03:13:20 -- nvmf/common.sh@116 -- # sync 00:25:24.902 03:13:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:24.902 03:13:20 -- nvmf/common.sh@119 -- # set +e 00:25:24.902 03:13:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:24.902 03:13:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:24.902 rmmod nvme_tcp 00:25:24.902 rmmod nvme_fabrics 00:25:24.902 rmmod nvme_keyring 00:25:24.902 03:13:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:24.902 03:13:20 -- nvmf/common.sh@123 -- # set -e 00:25:24.902 03:13:20 -- nvmf/common.sh@124 -- # return 0 00:25:24.902 03:13:20 -- nvmf/common.sh@477 -- # '[' -n 2086849 ']' 00:25:24.902 03:13:20 -- nvmf/common.sh@478 -- # killprocess 2086849 00:25:24.902 03:13:20 -- common/autotest_common.sh@926 -- # '[' -z 2086849 ']' 00:25:24.902 03:13:20 -- common/autotest_common.sh@930 -- # kill -0 2086849 00:25:24.902 03:13:20 -- common/autotest_common.sh@931 -- # uname 00:25:24.902 03:13:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:24.902 03:13:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2086849 00:25:24.902 03:13:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:24.902 03:13:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:24.902 03:13:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2086849' 00:25:24.902 killing process with pid 2086849 00:25:24.902 03:13:20 -- common/autotest_common.sh@945 -- # kill 2086849 00:25:24.902 [2024-07-14 03:13:20.141937] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 
00:25:24.902 03:13:20 -- common/autotest_common.sh@950 -- # wait 2086849 00:25:25.161 03:13:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:25.161 03:13:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:25.161 03:13:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:25.161 03:13:20 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:25.161 03:13:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:25.161 03:13:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:25.162 03:13:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:25.162 03:13:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.693 03:13:22 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:27.693 00:25:27.693 real 0m5.824s 00:25:27.693 user 0m6.854s 00:25:27.693 sys 0m1.826s 00:25:27.693 03:13:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:27.693 03:13:22 -- common/autotest_common.sh@10 -- # set +x 00:25:27.693 ************************************ 00:25:27.693 END TEST nvmf_identify 00:25:27.693 ************************************ 00:25:27.693 03:13:22 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:27.693 03:13:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:27.693 03:13:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:27.693 03:13:22 -- common/autotest_common.sh@10 -- # set +x 00:25:27.693 ************************************ 00:25:27.693 START TEST nvmf_perf 00:25:27.693 ************************************ 00:25:27.693 03:13:22 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:27.693 * Looking for test storage... 
00:25:27.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:27.693 03:13:22 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:27.693 03:13:22 -- nvmf/common.sh@7 -- # uname -s 00:25:27.693 03:13:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:27.693 03:13:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:27.693 03:13:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:27.693 03:13:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:27.693 03:13:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:27.693 03:13:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:27.693 03:13:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:27.693 03:13:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:27.693 03:13:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:27.693 03:13:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:27.693 03:13:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:27.693 03:13:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:27.693 03:13:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:27.693 03:13:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:27.693 03:13:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:27.693 03:13:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:27.693 03:13:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:27.693 03:13:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:27.693 03:13:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:27.694 03:13:22 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.694 03:13:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.694 03:13:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.694 03:13:22 -- paths/export.sh@5 -- # export PATH 00:25:27.694 03:13:22 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:27.694 03:13:22 -- nvmf/common.sh@46 -- # : 0 00:25:27.694 03:13:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:27.694 03:13:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:27.694 03:13:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:27.694 03:13:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:27.694 03:13:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:27.694 03:13:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:27.694 03:13:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:27.694 03:13:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:27.694 03:13:22 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:27.694 03:13:22 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:27.694 03:13:22 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:27.694 03:13:22 -- host/perf.sh@17 -- # nvmftestinit 00:25:27.694 03:13:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:27.694 03:13:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:27.694 03:13:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:27.694 03:13:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:27.694 03:13:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:27.694 03:13:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.694 03:13:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:25:27.694 03:13:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:27.694 03:13:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:27.694 03:13:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:27.694 03:13:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:27.694 03:13:22 -- common/autotest_common.sh@10 -- # set +x 00:25:29.595 03:13:24 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:29.595 03:13:24 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:29.595 03:13:24 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:29.595 03:13:24 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:29.595 03:13:24 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:29.595 03:13:24 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:29.595 03:13:24 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:29.595 03:13:24 -- nvmf/common.sh@294 -- # net_devs=() 00:25:29.595 03:13:24 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:29.595 03:13:24 -- nvmf/common.sh@295 -- # e810=() 00:25:29.595 03:13:24 -- nvmf/common.sh@295 -- # local -ga e810 00:25:29.595 03:13:24 -- nvmf/common.sh@296 -- # x722=() 00:25:29.595 03:13:24 -- nvmf/common.sh@296 -- # local -ga x722 00:25:29.595 03:13:24 -- nvmf/common.sh@297 -- # mlx=() 00:25:29.595 03:13:24 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:29.595 03:13:24 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:29.595 03:13:24 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:29.595 03:13:24 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:29.595 03:13:24 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:29.595 03:13:24 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:29.595 03:13:24 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:29.595 03:13:24 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:29.595 03:13:24 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:29.595 03:13:24 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:29.595 03:13:24 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:29.595 03:13:24 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:29.595 03:13:24 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:29.595 03:13:24 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:29.595 03:13:24 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:29.595 03:13:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:29.595 03:13:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:29.595 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:29.595 03:13:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:29.595 03:13:24 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:29.595 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:29.595 03:13:24 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:29.595 
03:13:24 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:29.595 03:13:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.595 03:13:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:29.595 03:13:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.595 03:13:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:29.595 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:29.595 03:13:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.595 03:13:24 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:29.595 03:13:24 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:29.595 03:13:24 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:29.595 03:13:24 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:29.595 03:13:24 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:29.595 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:29.595 03:13:24 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:29.595 03:13:24 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:29.595 03:13:24 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:29.595 03:13:24 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:29.595 03:13:24 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:29.595 03:13:24 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:29.595 03:13:24 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:29.595 03:13:24 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:29.595 03:13:24 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:29.595 03:13:24 -- 
nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:29.595 03:13:24 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:29.595 03:13:24 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:29.595 03:13:24 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:29.595 03:13:24 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:29.595 03:13:24 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:29.595 03:13:24 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:29.595 03:13:24 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:29.595 03:13:24 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:29.595 03:13:24 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:29.595 03:13:24 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:29.595 03:13:24 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:29.595 03:13:24 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:29.595 03:13:24 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:29.595 03:13:24 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:29.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:29.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:25:29.595 00:25:29.595 --- 10.0.0.2 ping statistics --- 00:25:29.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.595 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:25:29.595 03:13:24 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:29.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:29.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:25:29.595 00:25:29.595 --- 10.0.0.1 ping statistics --- 00:25:29.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:29.595 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:25:29.595 03:13:24 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:29.595 03:13:24 -- nvmf/common.sh@410 -- # return 0 00:25:29.595 03:13:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:29.595 03:13:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:29.595 03:13:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:29.595 03:13:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:29.595 03:13:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:29.595 03:13:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:29.595 03:13:24 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:29.595 03:13:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:29.595 03:13:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:29.595 03:13:24 -- common/autotest_common.sh@10 -- # set +x 00:25:29.595 03:13:24 -- nvmf/common.sh@469 -- # nvmfpid=2089027 00:25:29.595 03:13:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:29.595 03:13:24 -- nvmf/common.sh@470 -- # waitforlisten 2089027 00:25:29.595 03:13:24 -- common/autotest_common.sh@819 -- # '[' -z 2089027 ']' 00:25:29.595 03:13:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.595 03:13:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:29.595 03:13:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:29.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.595 03:13:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:29.595 03:13:24 -- common/autotest_common.sh@10 -- # set +x 00:25:29.595 [2024-07-14 03:13:24.805202] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:25:29.595 [2024-07-14 03:13:24.805276] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:29.596 EAL: No free 2048 kB hugepages reported on node 1 00:25:29.853 [2024-07-14 03:13:24.872467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:29.853 [2024-07-14 03:13:24.959327] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:29.853 [2024-07-14 03:13:24.959471] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:29.853 [2024-07-14 03:13:24.959488] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:29.853 [2024-07-14 03:13:24.959500] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:29.853 [2024-07-14 03:13:24.959553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.853 [2024-07-14 03:13:24.959613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:29.853 [2024-07-14 03:13:24.959679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:29.853 [2024-07-14 03:13:24.959681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.786 03:13:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:30.786 03:13:25 -- common/autotest_common.sh@852 -- # return 0 00:25:30.786 03:13:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:30.786 03:13:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:30.786 03:13:25 -- common/autotest_common.sh@10 -- # set +x 00:25:30.786 03:13:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:30.786 03:13:25 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:30.786 03:13:25 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:34.119 03:13:28 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:34.119 03:13:28 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:34.119 03:13:29 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:25:34.119 03:13:29 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:34.378 03:13:29 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:34.378 03:13:29 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:25:34.378 03:13:29 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:34.378 03:13:29 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:34.378 03:13:29 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_transport -t tcp -o 00:25:34.378 [2024-07-14 03:13:29.601086] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.378 03:13:29 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:34.635 03:13:29 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:34.635 03:13:29 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:34.893 03:13:30 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:34.893 03:13:30 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:35.151 03:13:30 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.409 [2024-07-14 03:13:30.564676] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.409 03:13:30 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:35.667 03:13:30 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:25:35.667 03:13:30 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:35.667 03:13:30 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:35.667 03:13:30 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:37.037 Initializing NVMe Controllers 00:25:37.037 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:25:37.037 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:25:37.037 Initialization complete. Launching workers. 
00:25:37.037 ======================================================== 00:25:37.037 Latency(us) 00:25:37.037 Device Information : IOPS MiB/s Average min max 00:25:37.037 PCIE (0000:88:00.0) NSID 1 from core 0: 86617.12 338.35 368.94 39.46 5243.24 00:25:37.037 ======================================================== 00:25:37.037 Total : 86617.12 338.35 368.94 39.46 5243.24 00:25:37.037 00:25:37.037 03:13:32 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:37.037 EAL: No free 2048 kB hugepages reported on node 1 00:25:37.970 Initializing NVMe Controllers 00:25:37.970 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:37.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:37.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:37.970 Initialization complete. Launching workers. 
00:25:37.970 ======================================================== 00:25:37.970 Latency(us) 00:25:37.970 Device Information : IOPS MiB/s Average min max 00:25:37.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 96.75 0.38 10450.12 202.89 45690.55 00:25:37.970 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 57.85 0.23 17559.63 7938.19 47901.95 00:25:37.970 ======================================================== 00:25:37.970 Total : 154.61 0.60 13110.45 202.89 47901.95 00:25:37.970 00:25:37.970 03:13:33 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:38.228 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.599 Initializing NVMe Controllers 00:25:39.599 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:39.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:39.599 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:39.599 Initialization complete. Launching workers. 
00:25:39.599 ======================================================== 00:25:39.599 Latency(us) 00:25:39.599 Device Information : IOPS MiB/s Average min max 00:25:39.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8090.70 31.60 3955.29 727.97 10781.08 00:25:39.599 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3862.99 15.09 8305.44 5787.72 16442.01 00:25:39.599 ======================================================== 00:25:39.599 Total : 11953.69 46.69 5361.10 727.97 16442.01 00:25:39.599 00:25:39.599 03:13:34 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:39.599 03:13:34 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:39.599 03:13:34 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:39.599 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.127 Initializing NVMe Controllers 00:25:42.127 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:42.127 Controller IO queue size 128, less than required. 00:25:42.127 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:42.127 Controller IO queue size 128, less than required. 00:25:42.127 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:42.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:42.127 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:42.127 Initialization complete. Launching workers. 
00:25:42.127 ======================================================== 00:25:42.127 Latency(us) 00:25:42.127 Device Information : IOPS MiB/s Average min max 00:25:42.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 776.54 194.13 170668.44 118253.51 258547.31 00:25:42.127 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 582.28 145.57 226259.77 59181.29 327284.89 00:25:42.127 ======================================================== 00:25:42.127 Total : 1358.81 339.70 194490.38 59181.29 327284.89 00:25:42.127 00:25:42.127 03:13:36 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:42.127 EAL: No free 2048 kB hugepages reported on node 1 00:25:42.127 No valid NVMe controllers or AIO or URING devices found 00:25:42.127 Initializing NVMe Controllers 00:25:42.127 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:42.127 Controller IO queue size 128, less than required. 00:25:42.127 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:42.127 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:42.127 Controller IO queue size 128, less than required. 00:25:42.127 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:42.127 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:42.127 WARNING: Some requested NVMe devices were skipped 00:25:42.127 03:13:37 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:42.127 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.654 Initializing NVMe Controllers 00:25:44.654 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:44.654 Controller IO queue size 128, less than required. 00:25:44.654 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:44.654 Controller IO queue size 128, less than required. 00:25:44.654 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:44.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:44.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:44.654 Initialization complete. Launching workers. 
00:25:44.654 00:25:44.654 ==================== 00:25:44.654 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:44.654 TCP transport: 00:25:44.654 polls: 27915 00:25:44.654 idle_polls: 16069 00:25:44.654 sock_completions: 11846 00:25:44.654 nvme_completions: 3393 00:25:44.654 submitted_requests: 5251 00:25:44.654 queued_requests: 1 00:25:44.654 00:25:44.654 ==================== 00:25:44.654 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:44.654 TCP transport: 00:25:44.654 polls: 28257 00:25:44.654 idle_polls: 11334 00:25:44.654 sock_completions: 16923 00:25:44.654 nvme_completions: 2806 00:25:44.654 submitted_requests: 4412 00:25:44.654 queued_requests: 1 00:25:44.654 ======================================================== 00:25:44.654 Latency(us) 00:25:44.654 Device Information : IOPS MiB/s Average min max 00:25:44.654 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 911.97 227.99 146404.27 78662.30 241298.00 00:25:44.654 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 764.98 191.24 170057.06 96069.63 214977.27 00:25:44.654 ======================================================== 00:25:44.654 Total : 1676.95 419.24 157194.01 78662.30 241298.00 00:25:44.654 00:25:44.654 03:13:39 -- host/perf.sh@66 -- # sync 00:25:44.654 03:13:39 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:44.912 03:13:40 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:25:44.912 03:13:40 -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:25:44.912 03:13:40 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:25:49.088 03:13:43 -- host/perf.sh@72 -- # ls_guid=f0b00a8c-802e-4243-ab3b-3e3bb53b8a95 00:25:49.088 03:13:43 -- host/perf.sh@73 -- # get_lvs_free_mb f0b00a8c-802e-4243-ab3b-3e3bb53b8a95 
00:25:49.088 03:13:43 -- common/autotest_common.sh@1343 -- # local lvs_uuid=f0b00a8c-802e-4243-ab3b-3e3bb53b8a95 00:25:49.088 03:13:43 -- common/autotest_common.sh@1344 -- # local lvs_info 00:25:49.088 03:13:43 -- common/autotest_common.sh@1345 -- # local fc 00:25:49.088 03:13:43 -- common/autotest_common.sh@1346 -- # local cs 00:25:49.088 03:13:43 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:49.088 03:13:43 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:25:49.088 { 00:25:49.088 "uuid": "f0b00a8c-802e-4243-ab3b-3e3bb53b8a95", 00:25:49.088 "name": "lvs_0", 00:25:49.088 "base_bdev": "Nvme0n1", 00:25:49.088 "total_data_clusters": 238234, 00:25:49.088 "free_clusters": 238234, 00:25:49.088 "block_size": 512, 00:25:49.088 "cluster_size": 4194304 00:25:49.088 } 00:25:49.088 ]' 00:25:49.088 03:13:43 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="f0b00a8c-802e-4243-ab3b-3e3bb53b8a95") .free_clusters' 00:25:49.088 03:13:43 -- common/autotest_common.sh@1348 -- # fc=238234 00:25:49.088 03:13:43 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="f0b00a8c-802e-4243-ab3b-3e3bb53b8a95") .cluster_size' 00:25:49.088 03:13:43 -- common/autotest_common.sh@1349 -- # cs=4194304 00:25:49.088 03:13:43 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:25:49.088 03:13:43 -- common/autotest_common.sh@1353 -- # echo 952936 00:25:49.088 952936 00:25:49.088 03:13:43 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:25:49.088 03:13:43 -- host/perf.sh@78 -- # free_mb=20480 00:25:49.088 03:13:43 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f0b00a8c-802e-4243-ab3b-3e3bb53b8a95 lbd_0 20480 00:25:49.088 03:13:44 -- host/perf.sh@80 -- # lb_guid=c0db217c-7a5a-42aa-983e-0ef6b4b02e0e 00:25:49.088 03:13:44 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore c0db217c-7a5a-42aa-983e-0ef6b4b02e0e lvs_n_0 00:25:50.019 03:13:45 -- host/perf.sh@83 -- # ls_nested_guid=79060531-a560-472f-acbd-68a2e7af88f6 00:25:50.019 03:13:45 -- host/perf.sh@84 -- # get_lvs_free_mb 79060531-a560-472f-acbd-68a2e7af88f6 00:25:50.019 03:13:45 -- common/autotest_common.sh@1343 -- # local lvs_uuid=79060531-a560-472f-acbd-68a2e7af88f6 00:25:50.019 03:13:45 -- common/autotest_common.sh@1344 -- # local lvs_info 00:25:50.019 03:13:45 -- common/autotest_common.sh@1345 -- # local fc 00:25:50.019 03:13:45 -- common/autotest_common.sh@1346 -- # local cs 00:25:50.019 03:13:45 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:50.276 03:13:45 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:25:50.276 { 00:25:50.276 "uuid": "f0b00a8c-802e-4243-ab3b-3e3bb53b8a95", 00:25:50.276 "name": "lvs_0", 00:25:50.276 "base_bdev": "Nvme0n1", 00:25:50.276 "total_data_clusters": 238234, 00:25:50.276 "free_clusters": 233114, 00:25:50.276 "block_size": 512, 00:25:50.276 "cluster_size": 4194304 00:25:50.276 }, 00:25:50.276 { 00:25:50.276 "uuid": "79060531-a560-472f-acbd-68a2e7af88f6", 00:25:50.276 "name": "lvs_n_0", 00:25:50.276 "base_bdev": "c0db217c-7a5a-42aa-983e-0ef6b4b02e0e", 00:25:50.276 "total_data_clusters": 5114, 00:25:50.276 "free_clusters": 5114, 00:25:50.276 "block_size": 512, 00:25:50.276 "cluster_size": 4194304 00:25:50.276 } 00:25:50.276 ]' 00:25:50.276 03:13:45 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="79060531-a560-472f-acbd-68a2e7af88f6") .free_clusters' 00:25:50.276 03:13:45 -- common/autotest_common.sh@1348 -- # fc=5114 00:25:50.276 03:13:45 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="79060531-a560-472f-acbd-68a2e7af88f6") .cluster_size' 00:25:50.276 03:13:45 -- common/autotest_common.sh@1349 -- # cs=4194304 00:25:50.276 03:13:45 -- common/autotest_common.sh@1352 -- # free_mb=20456 00:25:50.276 03:13:45 
-- common/autotest_common.sh@1353 -- # echo 20456 00:25:50.276 20456 00:25:50.276 03:13:45 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:25:50.276 03:13:45 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 79060531-a560-472f-acbd-68a2e7af88f6 lbd_nest_0 20456 00:25:50.532 03:13:45 -- host/perf.sh@88 -- # lb_nested_guid=09d6bc3b-c832-4330-9ae4-a82f0dd89b86 00:25:50.532 03:13:45 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:50.790 03:13:45 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:25:50.790 03:13:45 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 09d6bc3b-c832-4330-9ae4-a82f0dd89b86 00:25:51.047 03:13:46 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:51.303 03:13:46 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:25:51.303 03:13:46 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:25:51.303 03:13:46 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:51.303 03:13:46 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:51.303 03:13:46 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:51.303 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.524 Initializing NVMe Controllers 00:26:03.524 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:03.524 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:03.524 Initialization complete. Launching workers. 
00:26:03.524 ======================================================== 00:26:03.524 Latency(us) 00:26:03.524 Device Information : IOPS MiB/s Average min max 00:26:03.524 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.68 0.02 21437.33 236.20 46887.22 00:26:03.524 ======================================================== 00:26:03.524 Total : 46.68 0.02 21437.33 236.20 46887.22 00:26:03.524 00:26:03.524 03:13:56 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:03.524 03:13:56 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:03.524 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.482 Initializing NVMe Controllers 00:26:13.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:13.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:13.482 Initialization complete. Launching workers. 
00:26:13.482 ======================================================== 00:26:13.482 Latency(us) 00:26:13.482 Device Information : IOPS MiB/s Average min max 00:26:13.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 78.79 9.85 12691.74 5975.78 47895.97 00:26:13.482 ======================================================== 00:26:13.482 Total : 78.79 9.85 12691.74 5975.78 47895.97 00:26:13.482 00:26:13.482 03:14:06 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:13.482 03:14:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:13.482 03:14:06 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:13.482 EAL: No free 2048 kB hugepages reported on node 1 00:26:23.465 Initializing NVMe Controllers 00:26:23.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:23.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:23.465 Initialization complete. Launching workers. 
00:26:23.465 ======================================================== 00:26:23.465 Latency(us) 00:26:23.465 Device Information : IOPS MiB/s Average min max 00:26:23.465 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7051.30 3.44 4548.33 328.15 47872.35 00:26:23.465 ======================================================== 00:26:23.465 Total : 7051.30 3.44 4548.33 328.15 47872.35 00:26:23.465 00:26:23.465 03:14:17 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:23.465 03:14:17 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:23.465 EAL: No free 2048 kB hugepages reported on node 1 00:26:33.430 Initializing NVMe Controllers 00:26:33.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:33.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:33.430 Initialization complete. Launching workers. 
00:26:33.430 ======================================================== 00:26:33.430 Latency(us) 00:26:33.430 Device Information : IOPS MiB/s Average min max 00:26:33.430 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1736.10 217.01 18453.36 1134.06 38231.63 00:26:33.430 ======================================================== 00:26:33.430 Total : 1736.10 217.01 18453.36 1134.06 38231.63 00:26:33.430 00:26:33.430 03:14:27 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:33.430 03:14:27 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:33.430 03:14:27 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:33.430 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.424 Initializing NVMe Controllers 00:26:43.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:43.424 Controller IO queue size 128, less than required. 00:26:43.424 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:43.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:43.424 Initialization complete. Launching workers. 
00:26:43.424 ======================================================== 00:26:43.424 Latency(us) 00:26:43.424 Device Information : IOPS MiB/s Average min max 00:26:43.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10596.79 5.17 12084.26 1723.52 27226.43 00:26:43.424 ======================================================== 00:26:43.424 Total : 10596.79 5.17 12084.26 1723.52 27226.43 00:26:43.424 00:26:43.424 03:14:38 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:43.424 03:14:38 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:43.424 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.387 Initializing NVMe Controllers 00:26:53.387 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:53.387 Controller IO queue size 128, less than required. 00:26:53.387 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:53.387 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:53.387 Initialization complete. Launching workers. 
00:26:53.387 ======================================================== 00:26:53.387 Latency(us) 00:26:53.387 Device Information : IOPS MiB/s Average min max 00:26:53.387 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1199.81 149.98 107439.50 14889.66 208410.02 00:26:53.387 ======================================================== 00:26:53.387 Total : 1199.81 149.98 107439.50 14889.66 208410.02 00:26:53.387 00:26:53.387 03:14:48 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:53.645 03:14:48 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 09d6bc3b-c832-4330-9ae4-a82f0dd89b86 00:26:54.575 03:14:49 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:26:54.575 03:14:49 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c0db217c-7a5a-42aa-983e-0ef6b4b02e0e 00:26:55.139 03:14:50 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:26:55.139 03:14:50 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:55.139 03:14:50 -- host/perf.sh@114 -- # nvmftestfini 00:26:55.139 03:14:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:55.139 03:14:50 -- nvmf/common.sh@116 -- # sync 00:26:55.139 03:14:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:55.139 03:14:50 -- nvmf/common.sh@119 -- # set +e 00:26:55.139 03:14:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:55.139 03:14:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:55.139 rmmod nvme_tcp 00:26:55.139 rmmod nvme_fabrics 00:26:55.139 rmmod nvme_keyring 00:26:55.397 03:14:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:55.397 03:14:50 -- nvmf/common.sh@123 -- # set -e 00:26:55.397 03:14:50 -- 
nvmf/common.sh@124 -- # return 0 00:26:55.397 03:14:50 -- nvmf/common.sh@477 -- # '[' -n 2089027 ']' 00:26:55.397 03:14:50 -- nvmf/common.sh@478 -- # killprocess 2089027 00:26:55.397 03:14:50 -- common/autotest_common.sh@926 -- # '[' -z 2089027 ']' 00:26:55.397 03:14:50 -- common/autotest_common.sh@930 -- # kill -0 2089027 00:26:55.397 03:14:50 -- common/autotest_common.sh@931 -- # uname 00:26:55.397 03:14:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:55.397 03:14:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2089027 00:26:55.397 03:14:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:55.397 03:14:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:55.397 03:14:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2089027' 00:26:55.397 killing process with pid 2089027 00:26:55.397 03:14:50 -- common/autotest_common.sh@945 -- # kill 2089027 00:26:55.397 03:14:50 -- common/autotest_common.sh@950 -- # wait 2089027 00:26:57.295 03:14:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:57.295 03:14:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:57.295 03:14:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:57.295 03:14:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:57.295 03:14:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:57.295 03:14:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:57.295 03:14:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:57.295 03:14:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:59.199 03:14:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:59.199 00:26:59.199 real 1m31.621s 00:26:59.199 user 5m35.500s 00:26:59.199 sys 0m16.362s 00:26:59.199 03:14:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:59.199 03:14:54 -- common/autotest_common.sh@10 -- # set +x 00:26:59.199 
************************************ 00:26:59.199 END TEST nvmf_perf 00:26:59.199 ************************************ 00:26:59.199 03:14:54 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:59.199 03:14:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:59.199 03:14:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:59.199 03:14:54 -- common/autotest_common.sh@10 -- # set +x 00:26:59.199 ************************************ 00:26:59.199 START TEST nvmf_fio_host 00:26:59.199 ************************************ 00:26:59.199 03:14:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:59.199 * Looking for test storage... 00:26:59.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:59.199 03:14:54 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:59.199 03:14:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.199 03:14:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.199 03:14:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.199 03:14:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.199 03:14:54 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.199 03:14:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.199 03:14:54 -- paths/export.sh@5 -- # export PATH 00:26:59.199 03:14:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.199 03:14:54 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:59.199 03:14:54 -- nvmf/common.sh@7 -- # uname -s 00:26:59.199 03:14:54 -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:26:59.199 03:14:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:59.199 03:14:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:59.199 03:14:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:59.199 03:14:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:59.199 03:14:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:59.199 03:14:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:59.199 03:14:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:59.199 03:14:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:59.199 03:14:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:59.199 03:14:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:59.199 03:14:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:59.199 03:14:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:59.200 03:14:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:59.200 03:14:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:59.200 03:14:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:59.200 03:14:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:59.200 03:14:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:59.200 03:14:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:59.200 03:14:54 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.200 03:14:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.200 03:14:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.200 03:14:54 -- paths/export.sh@5 -- # export PATH 00:26:59.200 03:14:54 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:59.200 03:14:54 -- nvmf/common.sh@46 -- # : 0 00:26:59.200 03:14:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:59.200 03:14:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:59.200 03:14:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:59.200 03:14:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:59.200 03:14:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:59.200 03:14:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:59.200 03:14:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:59.200 03:14:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:59.200 03:14:54 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:59.200 03:14:54 -- host/fio.sh@14 -- # nvmftestinit 00:26:59.200 03:14:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:59.200 03:14:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:59.200 03:14:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:59.200 03:14:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:59.200 03:14:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:59.200 03:14:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:59.200 03:14:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:59.200 03:14:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
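The `paths/export.sh` traces above show the same `/opt/go`, `/opt/protoc`, and `/opt/golangci` directories being prepended on every `source`, so `PATH` accumulates many duplicate entries. A minimal sketch of an order-preserving dedupe that would collapse such a PATH back down (a hypothetical helper, not part of SPDK or the test harness):

```shell
#!/usr/bin/env bash
# Hypothetical helper: deduplicate a PATH-like colon-separated string,
# keeping the first occurrence of each directory and preserving order.
dedupe_path() {
  local out="" dir
  local IFS=':'               # split $1 on colons during word splitting
  for dir in $1; do
    case ":$out:" in
      *":$dir:"*) ;;          # directory already present, skip the repeat
      *) out="${out:+$out:}$dir" ;;
    esac
  done
  printf '%s\n' "$out"
}

dedupe_path "/a/bin:/b/bin:/a/bin:/c/bin"   # → /a/bin:/b/bin:/c/bin
```

This does not change what the CI job executes, since duplicate PATH entries are harmless to lookup, but it shows why the later `echo` of PATH in the trace keeps growing on each re-source.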
00:26:59.200 03:14:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:59.200 03:14:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:59.200 03:14:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:59.200 03:14:54 -- common/autotest_common.sh@10 -- # set +x 00:27:01.100 03:14:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:01.100 03:14:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:01.100 03:14:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:01.100 03:14:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:01.100 03:14:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:01.100 03:14:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:01.100 03:14:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:01.100 03:14:56 -- nvmf/common.sh@294 -- # net_devs=() 00:27:01.100 03:14:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:01.100 03:14:56 -- nvmf/common.sh@295 -- # e810=() 00:27:01.100 03:14:56 -- nvmf/common.sh@295 -- # local -ga e810 00:27:01.100 03:14:56 -- nvmf/common.sh@296 -- # x722=() 00:27:01.100 03:14:56 -- nvmf/common.sh@296 -- # local -ga x722 00:27:01.100 03:14:56 -- nvmf/common.sh@297 -- # mlx=() 00:27:01.100 03:14:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:01.100 03:14:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.100 03:14:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.100 03:14:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:01.100 03:14:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.100 03:14:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.100 03:14:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:01.100 03:14:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.100 03:14:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:27:01.100 03:14:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:01.100 03:14:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.100 03:14:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.100 03:14:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:01.100 03:14:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:01.100 03:14:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:01.100 03:14:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:01.100 03:14:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:01.100 03:14:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:01.100 03:14:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:01.100 03:14:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:01.100 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:01.100 03:14:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:01.100 03:14:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:01.100 03:14:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.100 03:14:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.100 03:14:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:01.100 03:14:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:01.100 03:14:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:01.100 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:01.100 03:14:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:01.100 03:14:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:01.100 03:14:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.100 03:14:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.100 03:14:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:01.100 03:14:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:01.100 03:14:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:01.100 
03:14:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:01.100 03:14:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:01.100 03:14:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.100 03:14:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:01.100 03:14:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.100 03:14:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:01.100 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:01.100 03:14:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.100 03:14:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:01.100 03:14:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.100 03:14:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:01.100 03:14:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.100 03:14:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:01.100 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:01.100 03:14:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.100 03:14:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:01.100 03:14:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:01.100 03:14:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:01.100 03:14:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:01.100 03:14:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:01.100 03:14:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:01.100 03:14:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:01.100 03:14:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:01.100 03:14:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:01.100 03:14:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:01.100 03:14:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:01.100 03:14:56 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:01.100 03:14:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:01.100 03:14:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:01.100 03:14:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:01.100 03:14:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:01.100 03:14:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:01.100 03:14:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:01.100 03:14:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:01.100 03:14:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:01.100 03:14:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:01.100 03:14:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:01.100 03:14:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:01.100 03:14:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:01.100 03:14:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:01.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:01.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:27:01.100 00:27:01.100 --- 10.0.0.2 ping statistics --- 00:27:01.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.100 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:27:01.100 03:14:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:01.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:01.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:27:01.100 00:27:01.100 --- 10.0.0.1 ping statistics --- 00:27:01.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.101 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:27:01.101 03:14:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.101 03:14:56 -- nvmf/common.sh@410 -- # return 0 00:27:01.101 03:14:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:01.101 03:14:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.101 03:14:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:01.101 03:14:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:01.101 03:14:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.101 03:14:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:01.101 03:14:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:01.101 03:14:56 -- host/fio.sh@16 -- # [[ y != y ]] 00:27:01.101 03:14:56 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:01.101 03:14:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:01.101 03:14:56 -- common/autotest_common.sh@10 -- # set +x 00:27:01.101 03:14:56 -- host/fio.sh@24 -- # nvmfpid=2101410 00:27:01.101 03:14:56 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:01.101 03:14:56 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:01.101 03:14:56 -- host/fio.sh@28 -- # waitforlisten 2101410 00:27:01.101 03:14:56 -- common/autotest_common.sh@819 -- # '[' -z 2101410 ']' 00:27:01.101 03:14:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.101 03:14:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:01.101 03:14:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:27:01.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.101 03:14:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:01.101 03:14:56 -- common/autotest_common.sh@10 -- # set +x 00:27:01.359 [2024-07-14 03:14:56.371538] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:01.359 [2024-07-14 03:14:56.371607] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.359 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.359 [2024-07-14 03:14:56.441203] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:01.359 [2024-07-14 03:14:56.532247] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:01.359 [2024-07-14 03:14:56.532387] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.359 [2024-07-14 03:14:56.532405] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:01.359 [2024-07-14 03:14:56.532418] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:01.359 [2024-07-14 03:14:56.532466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.359 [2024-07-14 03:14:56.532493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:01.359 [2024-07-14 03:14:56.532551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:01.359 [2024-07-14 03:14:56.532554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.293 03:14:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:02.293 03:14:57 -- common/autotest_common.sh@852 -- # return 0 00:27:02.293 03:14:57 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:02.293 [2024-07-14 03:14:57.540113] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.550 03:14:57 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:02.550 03:14:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:02.550 03:14:57 -- common/autotest_common.sh@10 -- # set +x 00:27:02.550 03:14:57 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:02.808 Malloc1 00:27:02.808 03:14:57 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:03.102 03:14:58 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:03.102 03:14:58 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:03.360 [2024-07-14 03:14:58.555878] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:03.360 03:14:58 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:03.618 03:14:58 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:03.618 03:14:58 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:03.618 03:14:58 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:03.618 03:14:58 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:03.618 03:14:58 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:03.618 03:14:58 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:03.618 03:14:58 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:03.618 03:14:58 -- common/autotest_common.sh@1320 -- # shift 00:27:03.618 03:14:58 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:03.618 03:14:58 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:03.618 03:14:58 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:03.618 03:14:58 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:03.618 03:14:58 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:03.618 03:14:58 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:03.618 03:14:58 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:03.618 03:14:58 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:03.618 03:14:58 -- common/autotest_common.sh@1324 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:03.618 03:14:58 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:03.618 03:14:58 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:03.618 03:14:58 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:03.618 03:14:58 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:03.618 03:14:58 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:03.618 03:14:58 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:03.875 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:03.875 fio-3.35 00:27:03.875 Starting 1 thread 00:27:03.875 EAL: No free 2048 kB hugepages reported on node 1 00:27:06.405 00:27:06.405 test: (groupid=0, jobs=1): err= 0: pid=2101903: Sun Jul 14 03:15:01 2024 00:27:06.405 read: IOPS=9556, BW=37.3MiB/s (39.1MB/s)(74.9MiB/2006msec) 00:27:06.405 slat (nsec): min=1880, max=179089, avg=2579.50, stdev=2024.46 00:27:06.405 clat (usec): min=2211, max=12657, avg=7409.36, stdev=570.97 00:27:06.405 lat (usec): min=2238, max=12659, avg=7411.94, stdev=570.84 00:27:06.405 clat percentiles (usec): 00:27:06.405 | 1.00th=[ 6128], 5.00th=[ 6521], 10.00th=[ 6718], 20.00th=[ 6980], 00:27:06.405 | 30.00th=[ 7111], 40.00th=[ 7308], 50.00th=[ 7439], 60.00th=[ 7570], 00:27:06.405 | 70.00th=[ 7701], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8291], 00:27:06.405 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[10683], 99.95th=[11863], 00:27:06.405 | 99.99th=[12649] 00:27:06.405 bw ( KiB/s): min=37576, max=38568, per=99.92%, avg=38198.00, stdev=443.62, samples=4 00:27:06.405 iops : min= 9394, max= 9642, avg=9549.50, stdev=110.90, samples=4 00:27:06.405 write: IOPS=9565, 
BW=37.4MiB/s (39.2MB/s)(75.0MiB/2006msec); 0 zone resets 00:27:06.405 slat (usec): min=2, max=132, avg= 2.72, stdev= 1.56 00:27:06.405 clat (usec): min=1470, max=11596, avg=5937.74, stdev=500.44 00:27:06.405 lat (usec): min=1479, max=11599, avg=5940.46, stdev=500.40 00:27:06.405 clat percentiles (usec): 00:27:06.405 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5538], 00:27:06.405 | 30.00th=[ 5735], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6063], 00:27:06.405 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:27:06.405 | 99.00th=[ 6980], 99.50th=[ 7111], 99.90th=[ 9372], 99.95th=[10552], 00:27:06.405 | 99.99th=[11600] 00:27:06.405 bw ( KiB/s): min=37568, max=38800, per=100.00%, avg=38266.00, stdev=518.27, samples=4 00:27:06.405 iops : min= 9392, max= 9700, avg=9566.50, stdev=129.57, samples=4 00:27:06.405 lat (msec) : 2=0.01%, 4=0.11%, 10=99.77%, 20=0.11% 00:27:06.405 cpu : usr=54.01%, sys=37.41%, ctx=63, majf=0, minf=32 00:27:06.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:06.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:06.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:06.405 issued rwts: total=19171,19188,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:06.405 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:06.405 00:27:06.405 Run status group 0 (all jobs): 00:27:06.405 READ: bw=37.3MiB/s (39.1MB/s), 37.3MiB/s-37.3MiB/s (39.1MB/s-39.1MB/s), io=74.9MiB (78.5MB), run=2006-2006msec 00:27:06.405 WRITE: bw=37.4MiB/s (39.2MB/s), 37.4MiB/s-37.4MiB/s (39.2MB/s-39.2MB/s), io=75.0MiB (78.6MB), run=2006-2006msec 00:27:06.405 03:15:01 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:06.405 03:15:01 -- common/autotest_common.sh@1339 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:06.405 03:15:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:06.405 03:15:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:06.405 03:15:01 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:06.405 03:15:01 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:06.405 03:15:01 -- common/autotest_common.sh@1320 -- # shift 00:27:06.405 03:15:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:06.405 03:15:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:06.405 03:15:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:06.405 03:15:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:06.405 03:15:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:06.405 03:15:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:06.405 03:15:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:06.405 03:15:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:06.405 03:15:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:06.405 03:15:01 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:06.405 03:15:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:06.405 03:15:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:06.405 03:15:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:06.405 03:15:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 
00:27:06.405 03:15:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:06.405 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:06.405 fio-3.35 00:27:06.405 Starting 1 thread 00:27:06.405 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.932 00:27:08.932 test: (groupid=0, jobs=1): err= 0: pid=2102350: Sun Jul 14 03:15:03 2024 00:27:08.932 read: IOPS=7769, BW=121MiB/s (127MB/s)(244MiB/2007msec) 00:27:08.932 slat (usec): min=2, max=106, avg= 3.71, stdev= 1.75 00:27:08.933 clat (usec): min=2433, max=20269, avg=9894.39, stdev=2694.90 00:27:08.933 lat (usec): min=2437, max=20273, avg=9898.10, stdev=2695.04 00:27:08.933 clat percentiles (usec): 00:27:08.933 | 1.00th=[ 4817], 5.00th=[ 5866], 10.00th=[ 6456], 20.00th=[ 7439], 00:27:08.933 | 30.00th=[ 8291], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10421], 00:27:08.933 | 70.00th=[11338], 80.00th=[12256], 90.00th=[13304], 95.00th=[14615], 00:27:08.933 | 99.00th=[16581], 99.50th=[17695], 99.90th=[19006], 99.95th=[19530], 00:27:08.933 | 99.99th=[20317] 00:27:08.933 bw ( KiB/s): min=58848, max=67360, per=51.09%, avg=63512.00, stdev=3961.14, samples=4 00:27:08.933 iops : min= 3678, max= 4210, avg=3969.50, stdev=247.57, samples=4 00:27:08.933 write: IOPS=4588, BW=71.7MiB/s (75.2MB/s)(130MiB/1816msec); 0 zone resets 00:27:08.933 slat (usec): min=31, max=136, avg=34.33, stdev= 4.81 00:27:08.933 clat (usec): min=3006, max=18741, avg=11170.32, stdev=1996.82 00:27:08.933 lat (usec): min=3038, max=18774, avg=11204.65, stdev=1997.19 00:27:08.933 clat percentiles (usec): 00:27:08.933 | 1.00th=[ 7570], 5.00th=[ 8291], 10.00th=[ 8717], 20.00th=[ 9372], 00:27:08.933 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10945], 60.00th=[11469], 00:27:08.933 | 70.00th=[12125], 80.00th=[12911], 
90.00th=[13960], 95.00th=[14746], 00:27:08.933 | 99.00th=[16188], 99.50th=[16712], 99.90th=[17957], 99.95th=[18220], 00:27:08.933 | 99.99th=[18744] 00:27:08.933 bw ( KiB/s): min=60704, max=70528, per=90.24%, avg=66248.00, stdev=4642.20, samples=4 00:27:08.933 iops : min= 3794, max= 4408, avg=4140.50, stdev=290.14, samples=4 00:27:08.933 lat (msec) : 4=0.23%, 10=45.37%, 20=54.37%, 50=0.03% 00:27:08.933 cpu : usr=73.78%, sys=22.23%, ctx=22, majf=0, minf=46 00:27:08.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:27:08.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:08.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:08.933 issued rwts: total=15593,8332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:08.933 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:08.933 00:27:08.933 Run status group 0 (all jobs): 00:27:08.933 READ: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=244MiB (255MB), run=2007-2007msec 00:27:08.933 WRITE: bw=71.7MiB/s (75.2MB/s), 71.7MiB/s-71.7MiB/s (75.2MB/s-75.2MB/s), io=130MiB (137MB), run=1816-1816msec 00:27:08.933 03:15:03 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:08.933 03:15:04 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:27:08.933 03:15:04 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:27:08.933 03:15:04 -- host/fio.sh@51 -- # get_nvme_bdfs 00:27:08.933 03:15:04 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:08.933 03:15:04 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:08.933 03:15:04 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:08.933 03:15:04 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:08.933 03:15:04 -- common/autotest_common.sh@1499 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:09.190 03:15:04 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:09.190 03:15:04 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:27:09.190 03:15:04 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:27:12.469 Nvme0n1 00:27:12.469 03:15:07 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:14.997 03:15:10 -- host/fio.sh@53 -- # ls_guid=a2b9adf7-b9c7-46c8-8d1f-91a8f9e7ef7f 00:27:14.997 03:15:10 -- host/fio.sh@54 -- # get_lvs_free_mb a2b9adf7-b9c7-46c8-8d1f-91a8f9e7ef7f 00:27:14.997 03:15:10 -- common/autotest_common.sh@1343 -- # local lvs_uuid=a2b9adf7-b9c7-46c8-8d1f-91a8f9e7ef7f 00:27:14.997 03:15:10 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:14.997 03:15:10 -- common/autotest_common.sh@1345 -- # local fc 00:27:14.997 03:15:10 -- common/autotest_common.sh@1346 -- # local cs 00:27:14.997 03:15:10 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:15.255 03:15:10 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:15.255 { 00:27:15.255 "uuid": "a2b9adf7-b9c7-46c8-8d1f-91a8f9e7ef7f", 00:27:15.255 "name": "lvs_0", 00:27:15.255 "base_bdev": "Nvme0n1", 00:27:15.255 "total_data_clusters": 930, 00:27:15.255 "free_clusters": 930, 00:27:15.255 "block_size": 512, 00:27:15.255 "cluster_size": 1073741824 00:27:15.255 } 00:27:15.255 ]' 00:27:15.255 03:15:10 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="a2b9adf7-b9c7-46c8-8d1f-91a8f9e7ef7f") .free_clusters' 00:27:15.255 03:15:10 -- common/autotest_common.sh@1348 -- # fc=930 00:27:15.255 03:15:10 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="a2b9adf7-b9c7-46c8-8d1f-91a8f9e7ef7f") 
.cluster_size' 00:27:15.512 03:15:10 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:27:15.512 03:15:10 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:27:15.512 03:15:10 -- common/autotest_common.sh@1353 -- # echo 952320 00:27:15.512 952320 00:27:15.512 03:15:10 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:27:15.769 ec8e85dd-0038-42bc-9a5f-f289c9248d1e 00:27:15.769 03:15:10 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:16.026 03:15:11 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:16.283 03:15:11 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:16.540 03:15:11 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:16.541 03:15:11 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:16.541 03:15:11 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:16.541 03:15:11 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:16.541 03:15:11 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:16.541 03:15:11 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:16.541 03:15:11 -- common/autotest_common.sh@1320 -- # 
shift 00:27:16.541 03:15:11 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:16.541 03:15:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:16.541 03:15:11 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:16.541 03:15:11 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:16.541 03:15:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:16.541 03:15:11 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:16.541 03:15:11 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:16.541 03:15:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:16.541 03:15:11 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:16.541 03:15:11 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:16.541 03:15:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:16.541 03:15:11 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:16.541 03:15:11 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:16.541 03:15:11 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:16.541 03:15:11 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:16.798 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:16.798 fio-3.35 00:27:16.798 Starting 1 thread 00:27:16.798 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.319 00:27:19.319 test: (groupid=0, jobs=1): err= 0: pid=2104188: Sun Jul 14 03:15:14 2024 00:27:19.319 read: IOPS=6334, BW=24.7MiB/s (25.9MB/s)(49.7MiB/2008msec) 00:27:19.319 slat (nsec): min=1870, max=147948, avg=2523.82, 
stdev=2278.86 00:27:19.319 clat (usec): min=1020, max=171121, avg=11151.10, stdev=11360.03 00:27:19.319 lat (usec): min=1022, max=171157, avg=11153.63, stdev=11360.29 00:27:19.319 clat percentiles (msec): 00:27:19.319 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:27:19.319 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:27:19.319 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 12], 00:27:19.319 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:27:19.319 | 99.99th=[ 171] 00:27:19.319 bw ( KiB/s): min=17800, max=28080, per=99.87%, avg=25306.00, stdev=5008.53, samples=4 00:27:19.319 iops : min= 4450, max= 7020, avg=6326.50, stdev=1252.13, samples=4 00:27:19.319 write: IOPS=6331, BW=24.7MiB/s (25.9MB/s)(49.7MiB/2008msec); 0 zone resets 00:27:19.319 slat (usec): min=2, max=112, avg= 2.65, stdev= 1.82 00:27:19.319 clat (usec): min=376, max=169397, avg=8927.56, stdev=10673.61 00:27:19.319 lat (usec): min=379, max=169403, avg=8930.21, stdev=10673.85 00:27:19.319 clat percentiles (msec): 00:27:19.319 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 8], 00:27:19.319 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:27:19.319 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 10], 95.00th=[ 10], 00:27:19.319 | 99.00th=[ 11], 99.50th=[ 16], 99.90th=[ 169], 99.95th=[ 169], 00:27:19.319 | 99.99th=[ 169] 00:27:19.319 bw ( KiB/s): min=18856, max=27456, per=99.93%, avg=25306.00, stdev=4300.00, samples=4 00:27:19.319 iops : min= 4714, max= 6864, avg=6326.50, stdev=1075.00, samples=4 00:27:19.319 lat (usec) : 500=0.01%, 750=0.01% 00:27:19.319 lat (msec) : 2=0.03%, 4=0.16%, 10=66.17%, 20=33.12%, 250=0.50% 00:27:19.319 cpu : usr=55.51%, sys=38.96%, ctx=83, majf=0, minf=32 00:27:19.319 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:19.319 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:19.319 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:27:19.319 issued rwts: total=12720,12713,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:19.319 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:19.319 00:27:19.319 Run status group 0 (all jobs): 00:27:19.319 READ: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=49.7MiB (52.1MB), run=2008-2008msec 00:27:19.319 WRITE: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=49.7MiB (52.1MB), run=2008-2008msec 00:27:19.319 03:15:14 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:19.319 03:15:14 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:20.695 03:15:15 -- host/fio.sh@64 -- # ls_nested_guid=f946e79e-ffa1-473c-9ddd-0429ecda7f38 00:27:20.695 03:15:15 -- host/fio.sh@65 -- # get_lvs_free_mb f946e79e-ffa1-473c-9ddd-0429ecda7f38 00:27:20.695 03:15:15 -- common/autotest_common.sh@1343 -- # local lvs_uuid=f946e79e-ffa1-473c-9ddd-0429ecda7f38 00:27:20.695 03:15:15 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:20.695 03:15:15 -- common/autotest_common.sh@1345 -- # local fc 00:27:20.695 03:15:15 -- common/autotest_common.sh@1346 -- # local cs 00:27:20.695 03:15:15 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:20.695 03:15:15 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:20.695 { 00:27:20.695 "uuid": "a2b9adf7-b9c7-46c8-8d1f-91a8f9e7ef7f", 00:27:20.695 "name": "lvs_0", 00:27:20.695 "base_bdev": "Nvme0n1", 00:27:20.695 "total_data_clusters": 930, 00:27:20.695 "free_clusters": 0, 00:27:20.695 "block_size": 512, 00:27:20.695 "cluster_size": 1073741824 00:27:20.695 }, 00:27:20.695 { 00:27:20.695 "uuid": "f946e79e-ffa1-473c-9ddd-0429ecda7f38", 00:27:20.695 "name": "lvs_n_0", 00:27:20.695 "base_bdev": 
"ec8e85dd-0038-42bc-9a5f-f289c9248d1e", 00:27:20.695 "total_data_clusters": 237847, 00:27:20.695 "free_clusters": 237847, 00:27:20.695 "block_size": 512, 00:27:20.695 "cluster_size": 4194304 00:27:20.695 } 00:27:20.695 ]' 00:27:20.695 03:15:15 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="f946e79e-ffa1-473c-9ddd-0429ecda7f38") .free_clusters' 00:27:20.695 03:15:15 -- common/autotest_common.sh@1348 -- # fc=237847 00:27:20.695 03:15:15 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="f946e79e-ffa1-473c-9ddd-0429ecda7f38") .cluster_size' 00:27:20.953 03:15:15 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:20.953 03:15:15 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:27:20.953 03:15:15 -- common/autotest_common.sh@1353 -- # echo 951388 00:27:20.953 951388 00:27:20.953 03:15:15 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:27:21.550 6cad7adc-df9e-470e-9f3e-bd84feb6890b 00:27:21.550 03:15:16 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:21.813 03:15:16 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:22.072 03:15:17 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:22.330 03:15:17 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:22.330 03:15:17 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:22.330 03:15:17 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:22.330 03:15:17 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:22.330 03:15:17 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:22.330 03:15:17 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:22.330 03:15:17 -- common/autotest_common.sh@1320 -- # shift 00:27:22.330 03:15:17 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:22.330 03:15:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:22.330 03:15:17 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:22.330 03:15:17 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:22.330 03:15:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:22.330 03:15:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:22.330 03:15:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:22.330 03:15:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:22.330 03:15:17 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:22.330 03:15:17 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:22.330 03:15:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:22.330 03:15:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:22.330 03:15:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:22.330 03:15:17 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:22.330 03:15:17 -- common/autotest_common.sh@1331 -- # 
/usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:22.330 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:22.330 fio-3.35 00:27:22.330 Starting 1 thread 00:27:22.588 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.113 00:27:25.113 test: (groupid=0, jobs=1): err= 0: pid=2104939: Sun Jul 14 03:15:19 2024 00:27:25.113 read: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(48.0MiB/2008msec) 00:27:25.113 slat (nsec): min=1866, max=195348, avg=2461.24, stdev=2403.18 00:27:25.113 clat (usec): min=4693, max=19132, avg=11586.52, stdev=967.11 00:27:25.113 lat (usec): min=4709, max=19134, avg=11588.98, stdev=967.05 00:27:25.113 clat percentiles (usec): 00:27:25.113 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10421], 20.00th=[10814], 00:27:25.113 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:27:25.113 | 70.00th=[12125], 80.00th=[12387], 90.00th=[12780], 95.00th=[13173], 00:27:25.113 | 99.00th=[13698], 99.50th=[13960], 99.90th=[16581], 99.95th=[17957], 00:27:25.113 | 99.99th=[19006] 00:27:25.113 bw ( KiB/s): min=23184, max=24856, per=99.84%, avg=24414.00, stdev=820.96, samples=4 00:27:25.113 iops : min= 5796, max= 6214, avg=6103.50, stdev=205.24, samples=4 00:27:25.113 write: IOPS=6094, BW=23.8MiB/s (25.0MB/s)(47.8MiB/2008msec); 0 zone resets 00:27:25.113 slat (nsec): min=1989, max=143862, avg=2566.26, stdev=1649.10 00:27:25.113 clat (usec): min=2366, max=16511, avg=9205.97, stdev=866.50 00:27:25.113 lat (usec): min=2371, max=16513, avg=9208.53, stdev=866.51 00:27:25.113 clat percentiles (usec): 00:27:25.113 | 1.00th=[ 7242], 5.00th=[ 7898], 10.00th=[ 8160], 20.00th=[ 8586], 00:27:25.113 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9372], 00:27:25.113 | 70.00th=[ 9634], 80.00th=[ 9896], 90.00th=[10290], 95.00th=[10552], 00:27:25.113 | 
99.00th=[11076], 99.50th=[11469], 99.90th=[15401], 99.95th=[15533], 00:27:25.113 | 99.99th=[16450] 00:27:25.113 bw ( KiB/s): min=24216, max=24512, per=99.92%, avg=24358.00, stdev=144.20, samples=4 00:27:25.114 iops : min= 6054, max= 6128, avg=6089.50, stdev=36.05, samples=4 00:27:25.114 lat (msec) : 4=0.04%, 10=44.18%, 20=55.78% 00:27:25.114 cpu : usr=53.76%, sys=40.91%, ctx=91, majf=0, minf=32 00:27:25.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:27:25.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:25.114 issued rwts: total=12276,12237,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.114 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:25.114 00:27:25.114 Run status group 0 (all jobs): 00:27:25.114 READ: bw=23.9MiB/s (25.0MB/s), 23.9MiB/s-23.9MiB/s (25.0MB/s-25.0MB/s), io=48.0MiB (50.3MB), run=2008-2008msec 00:27:25.114 WRITE: bw=23.8MiB/s (25.0MB/s), 23.8MiB/s-23.8MiB/s (25.0MB/s-25.0MB/s), io=47.8MiB (50.1MB), run=2008-2008msec 00:27:25.114 03:15:19 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:25.114 03:15:20 -- host/fio.sh@74 -- # sync 00:27:25.114 03:15:20 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:29.308 03:15:23 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:29.308 03:15:24 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:31.843 03:15:27 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:32.100 03:15:27 -- host/fio.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:34.020 03:15:29 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:34.020 03:15:29 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:34.020 03:15:29 -- host/fio.sh@86 -- # nvmftestfini 00:27:34.020 03:15:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:34.020 03:15:29 -- nvmf/common.sh@116 -- # sync 00:27:34.020 03:15:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:34.020 03:15:29 -- nvmf/common.sh@119 -- # set +e 00:27:34.020 03:15:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:34.020 03:15:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:34.020 rmmod nvme_tcp 00:27:34.278 rmmod nvme_fabrics 00:27:34.278 rmmod nvme_keyring 00:27:34.278 03:15:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:34.278 03:15:29 -- nvmf/common.sh@123 -- # set -e 00:27:34.278 03:15:29 -- nvmf/common.sh@124 -- # return 0 00:27:34.278 03:15:29 -- nvmf/common.sh@477 -- # '[' -n 2101410 ']' 00:27:34.278 03:15:29 -- nvmf/common.sh@478 -- # killprocess 2101410 00:27:34.278 03:15:29 -- common/autotest_common.sh@926 -- # '[' -z 2101410 ']' 00:27:34.278 03:15:29 -- common/autotest_common.sh@930 -- # kill -0 2101410 00:27:34.278 03:15:29 -- common/autotest_common.sh@931 -- # uname 00:27:34.278 03:15:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:34.278 03:15:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2101410 00:27:34.278 03:15:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:34.278 03:15:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:34.278 03:15:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2101410' 00:27:34.278 killing process with pid 2101410 00:27:34.278 03:15:29 -- common/autotest_common.sh@945 -- # kill 2101410 00:27:34.278 03:15:29 -- common/autotest_common.sh@950 -- # wait 2101410 00:27:34.538 03:15:29 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:34.538 03:15:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:34.538 03:15:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:34.538 03:15:29 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:34.538 03:15:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:34.538 03:15:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:34.538 03:15:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:34.538 03:15:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.443 03:15:31 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:36.443 00:27:36.443 real 0m37.528s 00:27:36.443 user 2m21.845s 00:27:36.443 sys 0m7.600s 00:27:36.443 03:15:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:36.443 03:15:31 -- common/autotest_common.sh@10 -- # set +x 00:27:36.443 ************************************ 00:27:36.443 END TEST nvmf_fio_host 00:27:36.443 ************************************ 00:27:36.443 03:15:31 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:36.443 03:15:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:36.443 03:15:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:36.443 03:15:31 -- common/autotest_common.sh@10 -- # set +x 00:27:36.443 ************************************ 00:27:36.443 START TEST nvmf_failover 00:27:36.443 ************************************ 00:27:36.443 03:15:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:36.702 * Looking for test storage... 
00:27:36.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:36.702 03:15:31 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:36.702 03:15:31 -- nvmf/common.sh@7 -- # uname -s 00:27:36.702 03:15:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:36.702 03:15:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:36.702 03:15:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:36.702 03:15:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:36.702 03:15:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:36.702 03:15:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:36.702 03:15:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:36.702 03:15:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:36.702 03:15:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:36.702 03:15:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:36.702 03:15:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:36.702 03:15:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:36.702 03:15:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:36.702 03:15:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:36.702 03:15:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:36.702 03:15:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:36.702 03:15:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:36.702 03:15:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:36.702 03:15:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:36.702 03:15:31 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.702 03:15:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.702 03:15:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.702 03:15:31 -- paths/export.sh@5 -- # export PATH 00:27:36.702 03:15:31 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:36.702 03:15:31 -- nvmf/common.sh@46 -- # : 0 00:27:36.702 03:15:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:36.702 03:15:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:36.702 03:15:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:36.702 03:15:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:36.702 03:15:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:36.702 03:15:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:36.702 03:15:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:36.702 03:15:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:36.702 03:15:31 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:36.702 03:15:31 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:36.702 03:15:31 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:36.702 03:15:31 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:36.702 03:15:31 -- host/failover.sh@18 -- # nvmftestinit 00:27:36.702 03:15:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:36.702 03:15:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:36.702 03:15:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:36.702 03:15:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:36.702 03:15:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:36.702 03:15:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:27:36.702 03:15:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:36.702 03:15:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:36.702 03:15:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:36.702 03:15:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:36.702 03:15:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:36.702 03:15:31 -- common/autotest_common.sh@10 -- # set +x 00:27:38.607 03:15:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:38.607 03:15:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:38.607 03:15:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:38.607 03:15:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:38.607 03:15:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:38.607 03:15:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:38.607 03:15:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:38.607 03:15:33 -- nvmf/common.sh@294 -- # net_devs=() 00:27:38.607 03:15:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:38.607 03:15:33 -- nvmf/common.sh@295 -- # e810=() 00:27:38.607 03:15:33 -- nvmf/common.sh@295 -- # local -ga e810 00:27:38.607 03:15:33 -- nvmf/common.sh@296 -- # x722=() 00:27:38.607 03:15:33 -- nvmf/common.sh@296 -- # local -ga x722 00:27:38.607 03:15:33 -- nvmf/common.sh@297 -- # mlx=() 00:27:38.607 03:15:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:38.607 03:15:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.607 03:15:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.607 03:15:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.607 03:15:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.607 03:15:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.607 03:15:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:27:38.607 03:15:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.607 03:15:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.607 03:15:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.607 03:15:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.607 03:15:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.607 03:15:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:38.607 03:15:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:38.607 03:15:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:38.607 03:15:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:38.607 03:15:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:38.607 03:15:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:38.607 03:15:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:38.607 03:15:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:38.607 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:38.607 03:15:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:38.607 03:15:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:38.607 03:15:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.607 03:15:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.607 03:15:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:38.607 03:15:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:38.607 03:15:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:38.607 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:38.607 03:15:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:38.607 03:15:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:38.607 03:15:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.607 03:15:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.607 03:15:33 
-- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:38.607 03:15:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:38.607 03:15:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:38.607 03:15:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:38.607 03:15:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:38.607 03:15:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.607 03:15:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:38.607 03:15:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.607 03:15:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:38.607 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:38.607 03:15:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.607 03:15:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:38.607 03:15:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.607 03:15:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:38.607 03:15:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.607 03:15:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:38.607 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:38.607 03:15:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.607 03:15:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:38.607 03:15:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:38.607 03:15:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:38.607 03:15:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:38.607 03:15:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:38.607 03:15:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.607 03:15:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.607 03:15:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.607 03:15:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 
00:27:38.607 03:15:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.607 03:15:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.607 03:15:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:38.607 03:15:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.607 03:15:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.607 03:15:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:38.607 03:15:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:38.607 03:15:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.607 03:15:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.607 03:15:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.607 03:15:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.607 03:15:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:38.607 03:15:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.607 03:15:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.607 03:15:33 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.607 03:15:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:38.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:27:38.607 00:27:38.607 --- 10.0.0.2 ping statistics --- 00:27:38.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.607 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:27:38.607 03:15:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:38.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:27:38.607 00:27:38.607 --- 10.0.0.1 ping statistics --- 00:27:38.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.607 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:27:38.607 03:15:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.607 03:15:33 -- nvmf/common.sh@410 -- # return 0 00:27:38.607 03:15:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:38.607 03:15:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.607 03:15:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:38.607 03:15:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:38.607 03:15:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.608 03:15:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:38.608 03:15:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:38.608 03:15:33 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:38.608 03:15:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:38.608 03:15:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:38.608 03:15:33 -- common/autotest_common.sh@10 -- # set +x 00:27:38.608 03:15:33 -- nvmf/common.sh@469 -- # nvmfpid=2108282 00:27:38.608 03:15:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:38.608 03:15:33 -- nvmf/common.sh@470 -- # waitforlisten 2108282 00:27:38.608 03:15:33 -- common/autotest_common.sh@819 -- # '[' -z 2108282 ']' 00:27:38.608 03:15:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.608 03:15:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:38.608 03:15:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:38.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.608 03:15:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:38.608 03:15:33 -- common/autotest_common.sh@10 -- # set +x 00:27:38.608 [2024-07-14 03:15:33.835681] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:27:38.608 [2024-07-14 03:15:33.835756] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.867 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.867 [2024-07-14 03:15:33.903334] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:38.867 [2024-07-14 03:15:33.986691] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:38.867 [2024-07-14 03:15:33.986834] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.867 [2024-07-14 03:15:33.986851] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.867 [2024-07-14 03:15:33.986863] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:38.867 [2024-07-14 03:15:33.986932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:38.867 [2024-07-14 03:15:33.986999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:38.867 [2024-07-14 03:15:33.987004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.803 03:15:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:39.803 03:15:34 -- common/autotest_common.sh@852 -- # return 0 00:27:39.803 03:15:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:39.803 03:15:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:39.803 03:15:34 -- common/autotest_common.sh@10 -- # set +x 00:27:39.803 03:15:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:39.803 03:15:34 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:40.061 [2024-07-14 03:15:35.142618] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.061 03:15:35 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:40.352 Malloc0 00:27:40.352 03:15:35 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:40.645 03:15:35 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:40.902 03:15:35 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:40.903 [2024-07-14 03:15:36.141517] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:41.161 03:15:36 -- host/failover.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:41.161 [2024-07-14 03:15:36.374285] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:41.161 03:15:36 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:41.419 [2024-07-14 03:15:36.607047] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:41.419 03:15:36 -- host/failover.sh@31 -- # bdevperf_pid=2108671 00:27:41.419 03:15:36 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:41.419 03:15:36 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:41.419 03:15:36 -- host/failover.sh@34 -- # waitforlisten 2108671 /var/tmp/bdevperf.sock 00:27:41.419 03:15:36 -- common/autotest_common.sh@819 -- # '[' -z 2108671 ']' 00:27:41.419 03:15:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:41.419 03:15:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:41.419 03:15:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:41.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
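The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock..." message above comes from a waitforlisten-style retry loop (the trace shows max_retries=100). A minimal sketch of that polling pattern; `wait_for_rpc_sock` is a hypothetical name, and the real helper also checks that the socket answers rpc.py calls, which this sketch omits:

```shell
#!/usr/bin/env bash
# Poll until an application's RPC socket path appears, giving up after
# max_retries attempts (the trace above uses max_retries=100).
wait_for_rpc_sock() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        # -e rather than -S so the sketch can be exercised with a plain file
        [[ -e $sock ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

The return code mirrors the helper's use in the trace: success lets the test proceed to issue RPCs against the socket, failure trips the EXIT trap that kills the process and dumps try.txt.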
00:27:41.419 03:15:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:42.795 03:15:37 -- common/autotest_common.sh@10 -- # set +x 00:27:42.795 03:15:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:42.795 03:15:37 -- common/autotest_common.sh@852 -- # return 0 00:27:42.795 03:15:37 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:42.795 NVMe0n1 00:27:43.364 03:15:38 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:43.364 00 00:27:43.364 03:15:38 -- host/failover.sh@39 -- # run_test_pid=2108876 00:27:43.364 03:15:38 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:43.364 03:15:38 -- host/failover.sh@41 -- # sleep 1 00:27:44.304 03:15:39 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:44.563 [2024-07-14 03:15:39.567096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b38d0 is same with the state(5) to be set 00:27:44.563 03:15:39 -- host/failover.sh@45 -- # sleep 3 00:27:47.854 03:15:42 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:47.854 00 00:27:48.115 03:15:43 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:48.115 [2024-07-14 03:15:43.319306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b4e20 is same with the state(5) to be set
00:27:48.116 03:15:43 -- host/failover.sh@50 -- # sleep 3 00:27:51.400 03:15:46 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:51.400 [2024-07-14 03:15:46.585506] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.400 03:15:46 -- host/failover.sh@55 -- # sleep 1 00:27:52.779 03:15:47 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:52.779 [2024-07-14 03:15:47.867766] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b5f00 is same with the state(5) to be set
00:27:52.780 03:15:47 -- host/failover.sh@59 -- # wait 2108876 00:27:59.354 0 00:27:59.354 03:15:53 -- host/failover.sh@61 -- # killprocess 2108671 00:27:59.354 03:15:53 -- common/autotest_common.sh@926 -- # '[' -z 2108671 ']' 00:27:59.354 03:15:53 -- common/autotest_common.sh@930 -- # kill -0 2108671 00:27:59.354 03:15:53 -- common/autotest_common.sh@931 -- # uname 00:27:59.354 03:15:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:59.354 03:15:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2108671 00:27:59.354 03:15:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:59.354 03:15:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:59.354 03:15:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2108671' killing process with pid 2108671 00:27:59.354 03:15:53 -- common/autotest_common.sh@945 -- # kill 2108671 00:27:59.354 03:15:53 -- common/autotest_common.sh@950 -- # wait 2108671 00:27:59.354 03:15:53 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:59.354 [2024-07-14 03:15:36.666919] Starting SPDK v24.01.1-pre git sha1 4b94202c6 /
DPDK 22.11.4 initialization... 00:27:59.354 [2024-07-14 03:15:36.667007] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2108671 ] 00:27:59.354 EAL: No free 2048 kB hugepages reported on node 1 00:27:59.354 [2024-07-14 03:15:36.725871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.354 [2024-07-14 03:15:36.809591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.354 Running I/O for 15 seconds... 00:27:59.354 [2024-07-14 03:15:39.568091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:118464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.354 [2024-07-14 03:15:39.568133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.354 [2024-07-14 03:15:39.568161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:118472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.354 [2024-07-14 03:15:39.568193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.354 [2024-07-14 03:15:39.568210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:117848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.354 [2024-07-14 03:15:39.568224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.354 [2024-07-14 03:15:39.568240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:117856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.354 [2024-07-14 03:15:39.568254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.354 [2024-07-14 03:15:39.568269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:117864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.354 [2024-07-14 03:15:39.568283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.354 [2024-07-14 03:15:39.568298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:117872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.354 [2024-07-14 03:15:39.568312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.354 [2024-07-14 03:15:39.568342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.354 [2024-07-14 03:15:39.568356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.354 [2024-07-14 03:15:39.568370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:117888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.354 [2024-07-14 03:15:39.568384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.354 [2024-07-14 03:15:39.568399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:117896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.354 [2024-07-14 03:15:39.568412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.354 [2024-07-14 03:15:39.568427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:117904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.354 [2024-07-14 
03:15:39.568441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.354 [2024-07-14 03:15:39.568455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:117912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.354 [2024-07-14 03:15:39.568469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.354 [2024-07-14 03:15:39.568490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.354 [2024-07-14 03:15:39.568504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.354 [2024-07-14 03:15:39.568519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:117936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.354 [2024-07-14 03:15:39.568532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.354 [2024-07-14 03:15:39.568547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:117944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.354 [2024-07-14 03:15:39.568560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.354 [2024-07-14 03:15:39.568574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:117960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.354 [2024-07-14 03:15:39.568587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.354 [2024-07-14 03:15:39.568602] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:67 nsid:1 lba:117968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.354 [2024-07-14 03:15:39.568614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.354 [2024-07-14 03:15:39.568629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.354 [2024-07-14 03:15:39.568642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.354 [2024-07-14 03:15:39.568657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.354 [2024-07-14 03:15:39.568670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.568684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.568697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.568712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:118512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.568725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.568739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.568752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.568767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.568780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.568794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:118560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.568807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.568822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.568839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.568854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:118592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.568888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.568907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:118600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.568922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.568937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 
03:15:39.568952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.568967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:118016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.568982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.568999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:118040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:118072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:118088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569116] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:118136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:118144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:118656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:118680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.355 [2024-07-14 03:15:39.569308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:118688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:118696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.355 [2024-07-14 03:15:39.569366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:118704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:118712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:118152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 
03:15:39.569452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:118168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:118184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:118192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:118216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569624] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:120 nsid:1 lba:118240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:118248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:118720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.355 [2024-07-14 03:15:39.569694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:118728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.355 [2024-07-14 03:15:39.569723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:118736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.355 [2024-07-14 03:15:39.569750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:118760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:118768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:118776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:118784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.355 [2024-07-14 03:15:39.569947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.355 [2024-07-14 03:15:39.569963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:118792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.356 [2024-07-14 
03:15:39.569977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.356 [2024-07-14 03:15:39.569996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:118800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.356 [2024-07-14 03:15:39.570010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.356 [2024-07-14 03:15:39.570025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:118808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.356 [2024-07-14 03:15:39.570039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.356 [2024-07-14 03:15:39.570054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:118816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.356 [2024-07-14 03:15:39.570068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.356 [2024-07-14 03:15:39.570083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.356 [2024-07-14 03:15:39.570101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.356 [2024-07-14 03:15:39.570118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:118832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.356 [2024-07-14 03:15:39.570132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.356 [2024-07-14 03:15:39.570147] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:118840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.356 [2024-07-14 03:15:39.570161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.356 [2024-07-14 03:15:39.570192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:118848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.356 [2024-07-14 03:15:39.570206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.356 [2024-07-14 03:15:39.570221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:118256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.356 [2024-07-14 03:15:39.570236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.356 [2024-07-14 03:15:39.570250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:118264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.356 [2024-07-14 03:15:39.570263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.356 [2024-07-14 03:15:39.570279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:118272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.356 [2024-07-14 03:15:39.570292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.356 [2024-07-14 03:15:39.570307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:118288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.356 [2024-07-14 03:15:39.570320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:59.356 [2024-07-14 03:15:39.570336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:118304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.356 [2024-07-14 03:15:39.570350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.356 [2024-07-14 03:15:39.570364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:118312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.356 [2024-07-14 03:15:39.570381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.356 [2024-07-14 03:15:39.570397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.356 [2024-07-14 03:15:39.570410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.356 [2024-07-14 03:15:39.570425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:118344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.356 [2024-07-14 03:15:39.570438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.356 [2024-07-14 03:15:39.570463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:118856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.356 [2024-07-14 03:15:39.570476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.356 [2024-07-14 03:15:39.570491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:118864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.356 [2024-07-14 
03:15:39.570504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... many similar qid:1 READ/WRITE command prints, each followed by an ABORTED - SQ DELETION (00/08) completion, elided ...] 00:27:59.357 [2024-07-14 03:15:39.571905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:27:59.357 [2024-07-14 03:15:39.571921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:118624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.357 [2024-07-14 03:15:39.571935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.357 [2024-07-14 03:15:39.571958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.357 [2024-07-14 03:15:39.571973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.357 [2024-07-14 03:15:39.571988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:118640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.357 [2024-07-14 03:15:39.572003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.357 [2024-07-14 03:15:39.572018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:118648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.357 [2024-07-14 03:15:39.572032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.357 [2024-07-14 03:15:39.572047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b1710 is same with the state(5) to be set 00:27:59.357 [2024-07-14 03:15:39.572064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.357 [2024-07-14 03:15:39.572076] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.357 [2024-07-14 03:15:39.572088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118672 len:8 PRP1 
0x0 PRP2 0x0 00:27:59.357 [2024-07-14 03:15:39.572102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.357 [2024-07-14 03:15:39.572159] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8b1710 was disconnected and freed. reset controller. 00:27:59.357 [2024-07-14 03:15:39.572198] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:59.357 [2024-07-14 03:15:39.572234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.357 [2024-07-14 03:15:39.572267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.357 [2024-07-14 03:15:39.572283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.357 [2024-07-14 03:15:39.572297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.357 [2024-07-14 03:15:39.572311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.357 [2024-07-14 03:15:39.572324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.357 [2024-07-14 03:15:39.572338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.357 [2024-07-14 03:15:39.572352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.357 [2024-07-14 03:15:39.572365] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in 
failed state. 00:27:59.357 [2024-07-14 03:15:39.574586] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.357 [2024-07-14 03:15:39.574624] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8920a0 (9): Bad file descriptor 00:27:59.357 [2024-07-14 03:15:39.609244] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:59.357 [2024-07-14 03:15:43.320252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:127048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.357 [2024-07-14 03:15:43.320293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.357 [2024-07-14 03:15:43.320325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:127056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.357 [2024-07-14 03:15:43.320341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.357 [2024-07-14 03:15:43.320357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.357 [2024-07-14 03:15:43.320371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.357 [2024-07-14 03:15:43.320385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:127080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.357 [2024-07-14 03:15:43.320399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.357 [2024-07-14 03:15:43.320414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127096 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.357 [2024-07-14 03:15:43.320428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... many similar qid:1 READ/WRITE command prints, each followed by an ABORTED - SQ DELETION (00/08) completion, elided ...] 00:27:59.358 [2024-07-14 03:15:43.321091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
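
[editorial aside] Every completion in this run prints the same status pair, "ABORTED - SQ DELETION (00/08)". Per the NVMe base specification, the pair is (Status Code Type / Status Code): SCT 0x0 is Generic Command Status, and SC 0x08 in that table is "Command Aborted due to SQ Deletion"; the trailing `dnr` flag is the Do Not Retry bit. A minimal, hypothetical Python sketch (not SPDK source — the helper name `decode_status` and the table are illustrative) of how these log suffixes decode:

```python
# Sketch: decode the "(SCT/SC)" pair printed by the completion log lines above.
# Values follow the NVMe base spec's completion status field; subset only.

GENERIC_STATUS = {  # SCT 0x0: Generic Command Status (subset)
    0x00: "SUCCESS",
    0x07: "ABORTED - BY REQUEST",
    0x08: "ABORTED - SQ DELETION",
}

def decode_status(sct: int, sc: int, dnr: int = 0) -> str:
    """Render an NVMe completion status in the style of the log above."""
    if sct == 0x0:
        name = GENERIC_STATUS.get(sc, f"GENERIC sc=0x{sc:02x}")
    else:
        name = f"sct=0x{sct:x} sc=0x{sc:02x}"
    return f"{name} ({sct:02x}/{sc:02x}) dnr:{1 if dnr else 0}"

print(decode_status(0x0, 0x08))  # the status seen throughout this run
```

With `dnr:0`, the initiator is permitted to retry these I/Os after the queue pair is re-established, which is consistent with the failover/reset sequence recorded later in this log. [end aside]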
00:27:59.358 [2024-07-14 03:15:43.321106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:127208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.358 [2024-07-14 03:15:43.321120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... many similar qid:1 READ/WRITE command prints, each followed by an ABORTED - SQ DELETION (00/08) completion, elided ...] 00:27:59.359 [2024-07-14 03:15:43.321759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.321773] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.321788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.359 [2024-07-14 03:15:43.321802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.321817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:127384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.321833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.321859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:127392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.321894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.321911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.321925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.321940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.321954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.321969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 
nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.321983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.321998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.322012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.322041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.322070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.322099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.322128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:59.359 [2024-07-14 03:15:43.322143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:127400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.322180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:127408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.359 [2024-07-14 03:15:43.322209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.359 [2024-07-14 03:15:43.322237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:127424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.359 [2024-07-14 03:15:43.322270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.359 [2024-07-14 03:15:43.322299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.359 [2024-07-14 03:15:43.322327] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:127448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.359 [2024-07-14 03:15:43.322355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:127456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.322382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.322410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.359 [2024-07-14 03:15:43.322438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:127480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.359 [2024-07-14 03:15:43.322466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:127488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.359 [2024-07-14 03:15:43.322493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.322520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.359 [2024-07-14 03:15:43.322549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.322576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.322607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.322636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 
[2024-07-14 03:15:43.322650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:126968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.322663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.322692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:126984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.322720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.322747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:127016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.359 [2024-07-14 03:15:43.322775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:127512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.359 [2024-07-14 03:15:43.322803] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:127520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.359 [2024-07-14 03:15:43.322830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:127528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.359 [2024-07-14 03:15:43.322891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.359 [2024-07-14 03:15:43.322908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.322922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.322938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:127544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.322952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.322967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.360 [2024-07-14 03:15:43.322980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.322998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 
lba:127560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:127568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.360 [2024-07-14 03:15:43.323041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:127576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.360 [2024-07-14 03:15:43.323069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:127592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.360 [2024-07-14 03:15:43.323158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 
[2024-07-14 03:15:43.323173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:127616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:127624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.360 [2024-07-14 03:15:43.323251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:127632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.360 [2024-07-14 03:15:43.323281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:127040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:127104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:127112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:127120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 
nsid:1 lba:127160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.360 [2024-07-14 03:15:43.323548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:127648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:127656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:127672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:59.360 [2024-07-14 03:15:43.323679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.360 [2024-07-14 03:15:43.323698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:127688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:127696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:127712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.360 [2024-07-14 03:15:43.323818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:127720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.360 [2024-07-14 03:15:43.323875] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.360 [2024-07-14 03:15:43.323908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:127736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:127176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.323975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.323990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:127184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.324005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.324020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.324034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.324050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:127224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.324064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.324080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:127232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.324094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.324109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:127248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.324124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.324139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:127280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.360 [2024-07-14 03:15:43.324168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.360 [2024-07-14 03:15:43.324199] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x89e510 is same with the state(5) to be set 00:27:59.360 [2024-07-14 03:15:43.324216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.360 [2024-07-14 03:15:43.324228] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.360 [2024-07-14 03:15:43.324245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:127320 len:8 PRP1 0x0 PRP2 0x0 00:27:59.360 [2024-07-14 03:15:43.324258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:43.324324] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x89e510 was disconnected and freed. reset controller. 00:27:59.361 [2024-07-14 03:15:43.324342] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:27:59.361 [2024-07-14 03:15:43.324389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.361 [2024-07-14 03:15:43.324408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:43.324424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.361 [2024-07-14 03:15:43.324437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:43.324451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.361 [2024-07-14 03:15:43.324464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:43.324478] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.361 [2024-07-14 03:15:43.324491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:43.324505] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:59.361 [2024-07-14 03:15:43.326666] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.361 [2024-07-14 03:15:43.326705] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8920a0 (9): Bad file descriptor 00:27:59.361 [2024-07-14 03:15:43.400942] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:59.361 [2024-07-14 03:15:47.868755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:107296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.868799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.868829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.868860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.868886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.868912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.868928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.868942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.868963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:106688 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.868979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.868995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:106728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 
03:15:47.869142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:107312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:107320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:107336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:107344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:107360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:107368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:107400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:107408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:107432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 
nsid:1 lba:107480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:106776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:59.361 [2024-07-14 03:15:47.869669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:106912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:107488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:107496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869829] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.361 [2024-07-14 03:15:47.869927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.361 [2024-07-14 03:15:47.869941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.869957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.869970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.869986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 
nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.362 [2024-07-14 03:15:47.870088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.362 [2024-07-14 03:15:47.870121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.362 [2024-07-14 03:15:47.870151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:107568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:59.362 [2024-07-14 03:15:47.870212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.362 [2024-07-14 03:15:47.870225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.362 [2024-07-14 03:15:47.870269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.362 [2024-07-14 03:15:47.870296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.362 [2024-07-14 03:15:47.870323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:107616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:107624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.362 [2024-07-14 03:15:47.870433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:107640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.362 [2024-07-14 03:15:47.870487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:107656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:107664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.362 [2024-07-14 03:15:47.870573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:107680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:107688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 
[2024-07-14 03:15:47.870698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:107104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:107120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:107128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:107136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.362 [2024-07-14 03:15:47.870909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:107704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:107712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.362 [2024-07-14 03:15:47.870967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.870982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:107720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.870996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.871010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:107728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.871024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.871038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 
lba:107736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.362 [2024-07-14 03:15:47.871052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.362 [2024-07-14 03:15:47.871067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.363 [2024-07-14 03:15:47.871080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.363 [2024-07-14 03:15:47.871108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.363 [2024-07-14 03:15:47.871136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:107768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.363 [2024-07-14 03:15:47.871212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 
[2024-07-14 03:15:47.871226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.363 [2024-07-14 03:15:47.871239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:107800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.363 [2024-07-14 03:15:47.871354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:107824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871382] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:107160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:107176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:107184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:107192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:107208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:107240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:107256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:107272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:107832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.363 [2024-07-14 03:15:47.871665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:107848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 
[2024-07-14 03:15:47.871708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.363 [2024-07-14 03:15:47.871721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:107864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.363 [2024-07-14 03:15:47.871754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:107872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:107880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.363 [2024-07-14 03:15:47.871838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:107896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871871] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:107904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.363 [2024-07-14 03:15:47.871957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.871972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:107920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.871987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.872003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.872021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.872036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:107328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.872051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.872066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 
lba:107352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.872080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.872095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:107376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.872108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.872123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.872136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.872151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:107392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.872169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.872200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:107424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.872214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.872228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:107448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.872241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:59.363 [2024-07-14 03:15:47.872255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:107928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.363 [2024-07-14 03:15:47.872273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.872289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:107936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.363 [2024-07-14 03:15:47.872302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.872317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:107944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.363 [2024-07-14 03:15:47.872330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.363 [2024-07-14 03:15:47.872344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:107952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.363 [2024-07-14 03:15:47.872357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.364 [2024-07-14 03:15:47.872372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:107960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.364 [2024-07-14 03:15:47.872385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.364 [2024-07-14 03:15:47.872402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.364 [2024-07-14 03:15:47.872416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.364 [2024-07-14 03:15:47.872430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:107976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.364 [2024-07-14 03:15:47.872443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.364 [2024-07-14 03:15:47.872458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:107984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:59.364 [2024-07-14 03:15:47.872471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.364 [2024-07-14 03:15:47.872485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:107456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.364 [2024-07-14 03:15:47.872499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.364 [2024-07-14 03:15:47.872513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:107464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.364 [2024-07-14 03:15:47.872526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.364 [2024-07-14 03:15:47.872541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:107472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.364 [2024-07-14 03:15:47.872555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.364 [2024-07-14 03:15:47.872569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 
lba:107504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.364 [2024-07-14 03:15:47.872582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.364 [2024-07-14 03:15:47.872597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:107512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.364 [2024-07-14 03:15:47.872626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.364 [2024-07-14 03:15:47.872641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:107520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.364 [2024-07-14 03:15:47.872655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.364 [2024-07-14 03:15:47.872670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:107528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:59.364 [2024-07-14 03:15:47.872684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.364 [2024-07-14 03:15:47.872698] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8b39c0 is same with the state(5) to be set 00:27:59.364 [2024-07-14 03:15:47.872715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:59.364 [2024-07-14 03:15:47.872725] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:59.364 [2024-07-14 03:15:47.872743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:107536 len:8 PRP1 0x0 PRP2 0x0 00:27:59.364 [2024-07-14 03:15:47.872756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.364 [2024-07-14 03:15:47.872816] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x8b39c0 was disconnected and freed. reset controller. 00:27:59.364 [2024-07-14 03:15:47.872838] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:59.364 [2024-07-14 03:15:47.872892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.364 [2024-07-14 03:15:47.872923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.364 [2024-07-14 03:15:47.872939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.364 [2024-07-14 03:15:47.872952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.364 [2024-07-14 03:15:47.872966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.364 [2024-07-14 03:15:47.872979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.364 [2024-07-14 03:15:47.872993] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:59.364 [2024-07-14 03:15:47.873007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:59.364 [2024-07-14 03:15:47.873020] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:59.364 [2024-07-14 03:15:47.873070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8920a0 (9): Bad file descriptor 00:27:59.364 [2024-07-14 03:15:47.875307] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:59.364 [2024-07-14 03:15:47.908412] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:59.364 00:27:59.364 Latency(us) 00:27:59.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.364 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:59.364 Verification LBA range: start 0x0 length 0x4000 00:27:59.364 NVMe0n1 : 15.01 13039.97 50.94 543.57 0.00 9406.71 873.81 16117.00 00:27:59.364 =================================================================================================================== 00:27:59.364 Total : 13039.97 50.94 543.57 0.00 9406.71 873.81 16117.00 00:27:59.364 Received shutdown signal, test time was about 15.000000 seconds 00:27:59.364 00:27:59.364 Latency(us) 00:27:59.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:59.364 =================================================================================================================== 00:27:59.364 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:59.364 03:15:53 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:27:59.364 03:15:53 -- host/failover.sh@65 -- # count=3 00:27:59.364 03:15:53 -- host/failover.sh@67 -- # (( count != 3 )) 00:27:59.364 03:15:53 -- host/failover.sh@73 -- # bdevperf_pid=2110716 00:27:59.364 03:15:53 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:59.364 03:15:53 -- host/failover.sh@75 -- # waitforlisten 2110716 /var/tmp/bdevperf.sock 00:27:59.364 03:15:53 -- common/autotest_common.sh@819 -- 
# '[' -z 2110716 ']' 00:27:59.364 03:15:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:59.364 03:15:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:59.364 03:15:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:59.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:59.364 03:15:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:59.364 03:15:53 -- common/autotest_common.sh@10 -- # set +x 00:27:59.621 03:15:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:59.621 03:15:54 -- common/autotest_common.sh@852 -- # return 0 00:27:59.621 03:15:54 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:59.879 [2024-07-14 03:15:54.927434] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:59.879 03:15:54 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:00.136 [2024-07-14 03:15:55.160093] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:00.136 03:15:55 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:00.392 NVMe0n1 00:28:00.392 03:15:55 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:00.649 00:28:00.649 03:15:55 -- host/failover.sh@80 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:01.215 00:28:01.215 03:15:56 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:01.215 03:15:56 -- host/failover.sh@82 -- # grep -q NVMe0 00:28:01.215 03:15:56 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:01.475 03:15:56 -- host/failover.sh@87 -- # sleep 3 00:28:04.757 03:15:59 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:04.757 03:15:59 -- host/failover.sh@88 -- # grep -q NVMe0 00:28:04.757 03:15:59 -- host/failover.sh@90 -- # run_test_pid=2111527 00:28:04.757 03:15:59 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:04.757 03:15:59 -- host/failover.sh@92 -- # wait 2111527 00:28:06.167 0 00:28:06.167 03:16:01 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:06.167 [2024-07-14 03:15:53.787146] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:28:06.167 [2024-07-14 03:15:53.787253] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2110716 ] 00:28:06.167 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.167 [2024-07-14 03:15:53.847141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.167 [2024-07-14 03:15:53.928481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.167 [2024-07-14 03:15:56.629319] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:06.167 [2024-07-14 03:15:56.629409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.167 [2024-07-14 03:15:56.629432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.167 [2024-07-14 03:15:56.629448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.167 [2024-07-14 03:15:56.629476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.167 [2024-07-14 03:15:56.629489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.167 [2024-07-14 03:15:56.629503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.167 [2024-07-14 03:15:56.629517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:06.167 [2024-07-14 03:15:56.629530] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:06.167 [2024-07-14 03:15:56.629545] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:06.167 [2024-07-14 03:15:56.629592] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:06.167 [2024-07-14 03:15:56.629634] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9e0a0 (9): Bad file descriptor 00:28:06.167 [2024-07-14 03:15:56.682658] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:06.167 Running I/O for 1 seconds... 00:28:06.167 00:28:06.167 Latency(us) 00:28:06.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.167 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:06.167 Verification LBA range: start 0x0 length 0x4000 00:28:06.167 NVMe0n1 : 1.01 11325.71 44.24 0.00 0.00 11252.18 1601.99 18738.44 00:28:06.167 =================================================================================================================== 00:28:06.167 Total : 11325.71 44.24 0.00 0.00 11252.18 1601.99 18738.44 00:28:06.167 03:16:01 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:06.167 03:16:01 -- host/failover.sh@95 -- # grep -q NVMe0 00:28:06.167 03:16:01 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:06.425 03:16:01 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:06.425 03:16:01 -- host/failover.sh@99 -- # grep -q NVMe0 00:28:06.682 03:16:01 -- host/failover.sh@100 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:06.941 03:16:01 -- host/failover.sh@101 -- # sleep 3 00:28:10.226 03:16:04 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:10.226 03:16:04 -- host/failover.sh@103 -- # grep -q NVMe0 00:28:10.226 03:16:05 -- host/failover.sh@108 -- # killprocess 2110716 00:28:10.226 03:16:05 -- common/autotest_common.sh@926 -- # '[' -z 2110716 ']' 00:28:10.226 03:16:05 -- common/autotest_common.sh@930 -- # kill -0 2110716 00:28:10.226 03:16:05 -- common/autotest_common.sh@931 -- # uname 00:28:10.226 03:16:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:10.226 03:16:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2110716 00:28:10.226 03:16:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:10.226 03:16:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:10.226 03:16:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2110716' 00:28:10.226 killing process with pid 2110716 00:28:10.226 03:16:05 -- common/autotest_common.sh@945 -- # kill 2110716 00:28:10.226 03:16:05 -- common/autotest_common.sh@950 -- # wait 2110716 00:28:10.226 03:16:05 -- host/failover.sh@110 -- # sync 00:28:10.226 03:16:05 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:10.490 03:16:05 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:10.490 03:16:05 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:10.490 03:16:05 -- host/failover.sh@116 -- # nvmftestfini 00:28:10.490 03:16:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:10.490 03:16:05 -- 
nvmf/common.sh@116 -- # sync 00:28:10.490 03:16:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:10.490 03:16:05 -- nvmf/common.sh@119 -- # set +e 00:28:10.490 03:16:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:10.490 03:16:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:10.490 rmmod nvme_tcp 00:28:10.490 rmmod nvme_fabrics 00:28:10.749 rmmod nvme_keyring 00:28:10.749 03:16:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:10.750 03:16:05 -- nvmf/common.sh@123 -- # set -e 00:28:10.750 03:16:05 -- nvmf/common.sh@124 -- # return 0 00:28:10.750 03:16:05 -- nvmf/common.sh@477 -- # '[' -n 2108282 ']' 00:28:10.750 03:16:05 -- nvmf/common.sh@478 -- # killprocess 2108282 00:28:10.750 03:16:05 -- common/autotest_common.sh@926 -- # '[' -z 2108282 ']' 00:28:10.750 03:16:05 -- common/autotest_common.sh@930 -- # kill -0 2108282 00:28:10.750 03:16:05 -- common/autotest_common.sh@931 -- # uname 00:28:10.750 03:16:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:10.750 03:16:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2108282 00:28:10.750 03:16:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:10.750 03:16:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:10.750 03:16:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2108282' 00:28:10.750 killing process with pid 2108282 00:28:10.750 03:16:05 -- common/autotest_common.sh@945 -- # kill 2108282 00:28:10.750 03:16:05 -- common/autotest_common.sh@950 -- # wait 2108282 00:28:11.007 03:16:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:11.007 03:16:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:11.007 03:16:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:11.007 03:16:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:11.007 03:16:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:11.007 03:16:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:28:11.007 03:16:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:11.007 03:16:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.908 03:16:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:12.908 00:28:12.908 real 0m36.466s 00:28:12.908 user 2m6.697s 00:28:12.908 sys 0m6.909s 00:28:12.908 03:16:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:12.908 03:16:08 -- common/autotest_common.sh@10 -- # set +x 00:28:12.908 ************************************ 00:28:12.908 END TEST nvmf_failover 00:28:12.908 ************************************ 00:28:12.908 03:16:08 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:12.908 03:16:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:12.908 03:16:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:12.908 03:16:08 -- common/autotest_common.sh@10 -- # set +x 00:28:12.908 ************************************ 00:28:12.908 START TEST nvmf_discovery 00:28:12.908 ************************************ 00:28:12.908 03:16:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:13.166 * Looking for test storage... 
00:28:13.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:13.166 03:16:08 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:13.166 03:16:08 -- nvmf/common.sh@7 -- # uname -s 00:28:13.166 03:16:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:13.166 03:16:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:13.166 03:16:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:13.166 03:16:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:13.166 03:16:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:13.166 03:16:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:13.166 03:16:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:13.166 03:16:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:13.166 03:16:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:13.166 03:16:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:13.166 03:16:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:13.166 03:16:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:13.166 03:16:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:13.166 03:16:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:13.166 03:16:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:13.166 03:16:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:13.166 03:16:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:13.166 03:16:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:13.166 03:16:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:13.166 03:16:08 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.166 03:16:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.166 03:16:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.166 03:16:08 -- paths/export.sh@5 -- # export PATH 00:28:13.166 03:16:08 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:13.166 03:16:08 -- nvmf/common.sh@46 -- # : 0 00:28:13.166 03:16:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:13.166 03:16:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:13.166 03:16:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:13.166 03:16:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:13.166 03:16:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:13.166 03:16:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:13.166 03:16:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:13.166 03:16:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:13.166 03:16:08 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:13.166 03:16:08 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:13.166 03:16:08 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:13.166 03:16:08 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:13.166 03:16:08 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:13.166 03:16:08 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:13.166 03:16:08 -- host/discovery.sh@25 -- # nvmftestinit 00:28:13.166 03:16:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:13.166 03:16:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.166 03:16:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:13.166 03:16:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:13.166 
03:16:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:13.166 03:16:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.166 03:16:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:13.166 03:16:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.166 03:16:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:13.166 03:16:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:13.166 03:16:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:13.166 03:16:08 -- common/autotest_common.sh@10 -- # set +x 00:28:15.075 03:16:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:15.075 03:16:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:15.075 03:16:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:15.075 03:16:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:15.075 03:16:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:15.075 03:16:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:15.075 03:16:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:15.075 03:16:10 -- nvmf/common.sh@294 -- # net_devs=() 00:28:15.075 03:16:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:15.075 03:16:10 -- nvmf/common.sh@295 -- # e810=() 00:28:15.075 03:16:10 -- nvmf/common.sh@295 -- # local -ga e810 00:28:15.075 03:16:10 -- nvmf/common.sh@296 -- # x722=() 00:28:15.075 03:16:10 -- nvmf/common.sh@296 -- # local -ga x722 00:28:15.075 03:16:10 -- nvmf/common.sh@297 -- # mlx=() 00:28:15.075 03:16:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:15.075 03:16:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.076 03:16:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.076 03:16:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.076 03:16:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.076 03:16:10 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.076 03:16:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.076 03:16:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.076 03:16:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.076 03:16:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.076 03:16:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.076 03:16:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.076 03:16:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:15.076 03:16:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:15.076 03:16:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:15.076 03:16:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:15.076 03:16:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:15.076 03:16:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:15.076 03:16:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:15.076 03:16:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:15.076 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:15.076 03:16:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:15.076 03:16:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:15.076 03:16:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.076 03:16:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.076 03:16:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:15.076 03:16:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:15.076 03:16:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:15.076 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:15.076 03:16:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:15.076 03:16:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:15.076 03:16:10 -- 
nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.076 03:16:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.076 03:16:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:15.076 03:16:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:15.076 03:16:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:15.076 03:16:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:15.076 03:16:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:15.076 03:16:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.076 03:16:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:15.076 03:16:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.076 03:16:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:15.076 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:15.076 03:16:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.076 03:16:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:15.076 03:16:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.076 03:16:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:15.076 03:16:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.076 03:16:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:15.076 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:15.076 03:16:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.076 03:16:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:15.076 03:16:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:15.076 03:16:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:15.076 03:16:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:15.076 03:16:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:15.076 03:16:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:15.076 03:16:10 -- nvmf/common.sh@229 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:15.076 03:16:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:15.076 03:16:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:15.076 03:16:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:15.076 03:16:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:15.076 03:16:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:15.076 03:16:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:15.076 03:16:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:15.076 03:16:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:15.076 03:16:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:15.076 03:16:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:15.076 03:16:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:15.076 03:16:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:15.076 03:16:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:15.076 03:16:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:15.076 03:16:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:15.076 03:16:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:15.076 03:16:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:15.076 03:16:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:15.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:15.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:28:15.076 00:28:15.076 --- 10.0.0.2 ping statistics --- 00:28:15.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.076 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:28:15.076 03:16:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:15.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:15.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:28:15.076 00:28:15.076 --- 10.0.0.1 ping statistics --- 00:28:15.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.076 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:28:15.076 03:16:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:15.076 03:16:10 -- nvmf/common.sh@410 -- # return 0 00:28:15.076 03:16:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:15.076 03:16:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:15.076 03:16:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:15.076 03:16:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:15.076 03:16:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:15.076 03:16:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:15.076 03:16:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:15.076 03:16:10 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:15.076 03:16:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:15.076 03:16:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:15.076 03:16:10 -- common/autotest_common.sh@10 -- # set +x 00:28:15.076 03:16:10 -- nvmf/common.sh@469 -- # nvmfpid=2114177 00:28:15.076 03:16:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:15.076 03:16:10 -- nvmf/common.sh@470 -- # waitforlisten 2114177 00:28:15.076 03:16:10 -- common/autotest_common.sh@819 
-- # '[' -z 2114177 ']' 00:28:15.076 03:16:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.076 03:16:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:15.076 03:16:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.076 03:16:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:15.076 03:16:10 -- common/autotest_common.sh@10 -- # set +x 00:28:15.076 [2024-07-14 03:16:10.260472] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:15.076 [2024-07-14 03:16:10.260559] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.076 EAL: No free 2048 kB hugepages reported on node 1 00:28:15.076 [2024-07-14 03:16:10.327717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.336 [2024-07-14 03:16:10.411338] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:15.336 [2024-07-14 03:16:10.411505] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.336 [2024-07-14 03:16:10.411523] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.336 [2024-07-14 03:16:10.411535] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:15.336 [2024-07-14 03:16:10.411561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.273 03:16:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:16.273 03:16:11 -- common/autotest_common.sh@852 -- # return 0 00:28:16.273 03:16:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:16.273 03:16:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:16.273 03:16:11 -- common/autotest_common.sh@10 -- # set +x 00:28:16.273 03:16:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.273 03:16:11 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:16.273 03:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.273 03:16:11 -- common/autotest_common.sh@10 -- # set +x 00:28:16.273 [2024-07-14 03:16:11.268074] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.273 03:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.273 03:16:11 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:16.273 03:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.273 03:16:11 -- common/autotest_common.sh@10 -- # set +x 00:28:16.273 [2024-07-14 03:16:11.276229] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:16.273 03:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.273 03:16:11 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:16.273 03:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.273 03:16:11 -- common/autotest_common.sh@10 -- # set +x 00:28:16.273 null0 00:28:16.273 03:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.273 03:16:11 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:16.273 03:16:11 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:28:16.273 03:16:11 -- common/autotest_common.sh@10 -- # set +x 00:28:16.273 null1 00:28:16.273 03:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.273 03:16:11 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:16.273 03:16:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:16.273 03:16:11 -- common/autotest_common.sh@10 -- # set +x 00:28:16.273 03:16:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:16.273 03:16:11 -- host/discovery.sh@45 -- # hostpid=2114330 00:28:16.273 03:16:11 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:16.273 03:16:11 -- host/discovery.sh@46 -- # waitforlisten 2114330 /tmp/host.sock 00:28:16.273 03:16:11 -- common/autotest_common.sh@819 -- # '[' -z 2114330 ']' 00:28:16.273 03:16:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:28:16.273 03:16:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:16.273 03:16:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:16.273 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:16.273 03:16:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:16.273 03:16:11 -- common/autotest_common.sh@10 -- # set +x 00:28:16.273 [2024-07-14 03:16:11.344561] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:28:16.273 [2024-07-14 03:16:11.344626] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2114330 ] 00:28:16.273 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.273 [2024-07-14 03:16:11.404576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.273 [2024-07-14 03:16:11.493264] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:16.273 [2024-07-14 03:16:11.493441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.210 03:16:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:17.210 03:16:12 -- common/autotest_common.sh@852 -- # return 0 00:28:17.210 03:16:12 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:17.210 03:16:12 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:17.210 03:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.210 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:17.210 03:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.210 03:16:12 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:17.210 03:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.210 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:17.210 03:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.210 03:16:12 -- host/discovery.sh@72 -- # notify_id=0 00:28:17.210 03:16:12 -- host/discovery.sh@78 -- # get_subsystem_names 00:28:17.210 03:16:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:17.210 03:16:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:17.210 
03:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.210 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:17.210 03:16:12 -- host/discovery.sh@59 -- # sort 00:28:17.210 03:16:12 -- host/discovery.sh@59 -- # xargs 00:28:17.210 03:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.210 03:16:12 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:28:17.210 03:16:12 -- host/discovery.sh@79 -- # get_bdev_list 00:28:17.210 03:16:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.210 03:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.210 03:16:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:17.210 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:17.210 03:16:12 -- host/discovery.sh@55 -- # sort 00:28:17.210 03:16:12 -- host/discovery.sh@55 -- # xargs 00:28:17.210 03:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.210 03:16:12 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:28:17.210 03:16:12 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:17.210 03:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.210 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:17.210 03:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.210 03:16:12 -- host/discovery.sh@82 -- # get_subsystem_names 00:28:17.210 03:16:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:17.210 03:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.210 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:17.210 03:16:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:17.210 03:16:12 -- host/discovery.sh@59 -- # sort 00:28:17.210 03:16:12 -- host/discovery.sh@59 -- # xargs 00:28:17.210 03:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.469 03:16:12 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:28:17.469 03:16:12 -- 
host/discovery.sh@83 -- # get_bdev_list 00:28:17.469 03:16:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.469 03:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.469 03:16:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:17.469 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:17.469 03:16:12 -- host/discovery.sh@55 -- # sort 00:28:17.469 03:16:12 -- host/discovery.sh@55 -- # xargs 00:28:17.469 03:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.469 03:16:12 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:17.469 03:16:12 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:17.469 03:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.469 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:17.469 03:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.469 03:16:12 -- host/discovery.sh@86 -- # get_subsystem_names 00:28:17.469 03:16:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:17.469 03:16:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:17.469 03:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.469 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:17.469 03:16:12 -- host/discovery.sh@59 -- # sort 00:28:17.469 03:16:12 -- host/discovery.sh@59 -- # xargs 00:28:17.469 03:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.469 03:16:12 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:28:17.469 03:16:12 -- host/discovery.sh@87 -- # get_bdev_list 00:28:17.469 03:16:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.469 03:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.469 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:17.469 03:16:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:17.469 03:16:12 -- host/discovery.sh@55 -- # sort 00:28:17.469 03:16:12 
-- host/discovery.sh@55 -- # xargs 00:28:17.469 03:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.469 03:16:12 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:17.469 03:16:12 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:17.469 03:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.469 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:17.469 [2024-07-14 03:16:12.595831] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:17.469 03:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.469 03:16:12 -- host/discovery.sh@92 -- # get_subsystem_names 00:28:17.469 03:16:12 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:17.470 03:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.470 03:16:12 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:17.470 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:17.470 03:16:12 -- host/discovery.sh@59 -- # sort 00:28:17.470 03:16:12 -- host/discovery.sh@59 -- # xargs 00:28:17.470 03:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.470 03:16:12 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:17.470 03:16:12 -- host/discovery.sh@93 -- # get_bdev_list 00:28:17.470 03:16:12 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.470 03:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.470 03:16:12 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:17.470 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:17.470 03:16:12 -- host/discovery.sh@55 -- # sort 00:28:17.470 03:16:12 -- host/discovery.sh@55 -- # xargs 00:28:17.470 03:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.470 03:16:12 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:28:17.470 03:16:12 -- host/discovery.sh@94 -- # get_notification_count 
00:28:17.470 03:16:12 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:17.470 03:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.470 03:16:12 -- host/discovery.sh@74 -- # jq '. | length' 00:28:17.470 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:17.470 03:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.470 03:16:12 -- host/discovery.sh@74 -- # notification_count=0 00:28:17.470 03:16:12 -- host/discovery.sh@75 -- # notify_id=0 00:28:17.470 03:16:12 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:28:17.470 03:16:12 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:17.470 03:16:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.470 03:16:12 -- common/autotest_common.sh@10 -- # set +x 00:28:17.470 03:16:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.470 03:16:12 -- host/discovery.sh@100 -- # sleep 1 00:28:18.409 [2024-07-14 03:16:13.388830] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:18.409 [2024-07-14 03:16:13.388863] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:18.409 [2024-07-14 03:16:13.388901] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:18.409 [2024-07-14 03:16:13.516323] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:18.409 [2024-07-14 03:16:13.619189] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:18.409 [2024-07-14 03:16:13.619223] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:18.667 03:16:13 -- host/discovery.sh@101 -- # get_subsystem_names 
00:28:18.667 03:16:13 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:18.667 03:16:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.667 03:16:13 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:18.667 03:16:13 -- host/discovery.sh@59 -- # sort 00:28:18.667 03:16:13 -- common/autotest_common.sh@10 -- # set +x 00:28:18.667 03:16:13 -- host/discovery.sh@59 -- # xargs 00:28:18.667 03:16:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.667 03:16:13 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.667 03:16:13 -- host/discovery.sh@102 -- # get_bdev_list 00:28:18.667 03:16:13 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:18.667 03:16:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.667 03:16:13 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:18.667 03:16:13 -- common/autotest_common.sh@10 -- # set +x 00:28:18.667 03:16:13 -- host/discovery.sh@55 -- # sort 00:28:18.667 03:16:13 -- host/discovery.sh@55 -- # xargs 00:28:18.667 03:16:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.667 03:16:13 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:18.667 03:16:13 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:28:18.667 03:16:13 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:18.667 03:16:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.667 03:16:13 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:18.667 03:16:13 -- common/autotest_common.sh@10 -- # set +x 00:28:18.667 03:16:13 -- host/discovery.sh@63 -- # sort -n 00:28:18.667 03:16:13 -- host/discovery.sh@63 -- # xargs 00:28:18.667 03:16:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.667 03:16:13 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:28:18.667 03:16:13 -- host/discovery.sh@104 -- # get_notification_count 00:28:18.667 03:16:13 -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:18.667 03:16:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.667 03:16:13 -- host/discovery.sh@74 -- # jq '. | length' 00:28:18.667 03:16:13 -- common/autotest_common.sh@10 -- # set +x 00:28:18.667 03:16:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.667 03:16:13 -- host/discovery.sh@74 -- # notification_count=1 00:28:18.667 03:16:13 -- host/discovery.sh@75 -- # notify_id=1 00:28:18.667 03:16:13 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:28:18.667 03:16:13 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:18.667 03:16:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:18.667 03:16:13 -- common/autotest_common.sh@10 -- # set +x 00:28:18.667 03:16:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.667 03:16:13 -- host/discovery.sh@109 -- # sleep 1 00:28:20.043 03:16:14 -- host/discovery.sh@110 -- # get_bdev_list 00:28:20.043 03:16:14 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:20.043 03:16:14 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:20.043 03:16:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.043 03:16:14 -- common/autotest_common.sh@10 -- # set +x 00:28:20.043 03:16:14 -- host/discovery.sh@55 -- # sort 00:28:20.043 03:16:14 -- host/discovery.sh@55 -- # xargs 00:28:20.043 03:16:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.043 03:16:14 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:20.043 03:16:14 -- host/discovery.sh@111 -- # get_notification_count 00:28:20.043 03:16:14 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:20.043 03:16:14 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:20.043 03:16:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.043 03:16:14 -- common/autotest_common.sh@10 -- # set +x 00:28:20.043 03:16:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.043 03:16:14 -- host/discovery.sh@74 -- # notification_count=1 00:28:20.043 03:16:14 -- host/discovery.sh@75 -- # notify_id=2 00:28:20.043 03:16:14 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:28:20.043 03:16:14 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:20.043 03:16:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.043 03:16:14 -- common/autotest_common.sh@10 -- # set +x 00:28:20.043 [2024-07-14 03:16:14.994839] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:20.043 [2024-07-14 03:16:14.995305] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:20.043 [2024-07-14 03:16:14.995362] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:20.043 03:16:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.043 03:16:14 -- host/discovery.sh@117 -- # sleep 1 00:28:20.043 [2024-07-14 03:16:15.123742] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:20.303 [2024-07-14 03:16:15.427166] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:20.303 [2024-07-14 03:16:15.427205] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:20.303 [2024-07-14 03:16:15.427217] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:20.871 03:16:16 -- host/discovery.sh@118 -- # get_subsystem_names 
00:28:20.871 03:16:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:20.871 03:16:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:20.871 03:16:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.871 03:16:16 -- host/discovery.sh@59 -- # sort 00:28:20.871 03:16:16 -- common/autotest_common.sh@10 -- # set +x 00:28:20.871 03:16:16 -- host/discovery.sh@59 -- # xargs 00:28:20.871 03:16:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.871 03:16:16 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.871 03:16:16 -- host/discovery.sh@119 -- # get_bdev_list 00:28:20.871 03:16:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:20.871 03:16:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.871 03:16:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:20.871 03:16:16 -- common/autotest_common.sh@10 -- # set +x 00:28:20.871 03:16:16 -- host/discovery.sh@55 -- # sort 00:28:20.871 03:16:16 -- host/discovery.sh@55 -- # xargs 00:28:20.871 03:16:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.871 03:16:16 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:20.871 03:16:16 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:28:20.871 03:16:16 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:20.871 03:16:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.871 03:16:16 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:20.871 03:16:16 -- common/autotest_common.sh@10 -- # set +x 00:28:20.871 03:16:16 -- host/discovery.sh@63 -- # sort -n 00:28:20.871 03:16:16 -- host/discovery.sh@63 -- # xargs 00:28:20.871 03:16:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.130 03:16:16 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:21.130 03:16:16 -- host/discovery.sh@121 -- # 
get_notification_count 00:28:21.130 03:16:16 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:21.130 03:16:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.130 03:16:16 -- host/discovery.sh@74 -- # jq '. | length' 00:28:21.130 03:16:16 -- common/autotest_common.sh@10 -- # set +x 00:28:21.130 03:16:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.130 03:16:16 -- host/discovery.sh@74 -- # notification_count=0 00:28:21.130 03:16:16 -- host/discovery.sh@75 -- # notify_id=2 00:28:21.130 03:16:16 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:28:21.130 03:16:16 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:21.130 03:16:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.130 03:16:16 -- common/autotest_common.sh@10 -- # set +x 00:28:21.130 [2024-07-14 03:16:16.166544] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:21.130 [2024-07-14 03:16:16.166574] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:21.130 03:16:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.130 03:16:16 -- host/discovery.sh@127 -- # sleep 1 00:28:21.130 [2024-07-14 03:16:16.172661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:21.130 [2024-07-14 03:16:16.172702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.130 [2024-07-14 03:16:16.172729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:21.130 [2024-07-14 03:16:16.172744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:21.130 [2024-07-14 03:16:16.172760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:21.130 [2024-07-14 03:16:16.172775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.130 [2024-07-14 03:16:16.172791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:21.130 [2024-07-14 03:16:16.172805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:21.130 [2024-07-14 03:16:16.172820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780590 is same with the state(5) to be set 00:28:21.130 [2024-07-14 03:16:16.182668] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x780590 (9): Bad file descriptor 00:28:21.130 [2024-07-14 03:16:16.192714] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:21.130 [2024-07-14 03:16:16.192998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.130 [2024-07-14 03:16:16.193201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.130 [2024-07-14 03:16:16.193241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x780590 with addr=10.0.0.2, port=4420 00:28:21.130 [2024-07-14 03:16:16.193260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780590 is same with the state(5) to be set 00:28:21.130 [2024-07-14 03:16:16.193285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x780590 (9): Bad file descriptor 00:28:21.130 [2024-07-14 03:16:16.193309] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:21.130 [2024-07-14 03:16:16.193325] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:21.130 [2024-07-14 03:16:16.193341] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:21.130 [2024-07-14 03:16:16.193363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:21.130 [2024-07-14 03:16:16.202793] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:21.130 [2024-07-14 03:16:16.203069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.130 [2024-07-14 03:16:16.203278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.130 [2024-07-14 03:16:16.203306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x780590 with addr=10.0.0.2, port=4420 00:28:21.130 [2024-07-14 03:16:16.203325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780590 is same with the state(5) to be set 00:28:21.130 [2024-07-14 03:16:16.203350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x780590 (9): Bad file descriptor 00:28:21.130 [2024-07-14 03:16:16.203372] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:21.130 [2024-07-14 03:16:16.203387] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:21.130 [2024-07-14 03:16:16.203403] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:21.130 [2024-07-14 03:16:16.203424] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.130 [2024-07-14 03:16:16.212877] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:21.130 [2024-07-14 03:16:16.213174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.130 [2024-07-14 03:16:16.213381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.130 [2024-07-14 03:16:16.213410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x780590 with addr=10.0.0.2, port=4420 00:28:21.130 [2024-07-14 03:16:16.213429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780590 is same with the state(5) to be set 00:28:21.130 [2024-07-14 03:16:16.213455] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x780590 (9): Bad file descriptor 00:28:21.130 [2024-07-14 03:16:16.213478] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:21.130 [2024-07-14 03:16:16.213494] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:21.130 [2024-07-14 03:16:16.213509] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:21.130 [2024-07-14 03:16:16.213531] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.130 [2024-07-14 03:16:16.222954] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:21.130 [2024-07-14 03:16:16.223325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.130 [2024-07-14 03:16:16.223528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.130 [2024-07-14 03:16:16.223556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x780590 with addr=10.0.0.2, port=4420 00:28:21.130 [2024-07-14 03:16:16.223575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780590 is same with the state(5) to be set 00:28:21.130 [2024-07-14 03:16:16.223599] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x780590 (9): Bad file descriptor 00:28:21.130 [2024-07-14 03:16:16.223622] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:21.130 [2024-07-14 03:16:16.223638] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:21.130 [2024-07-14 03:16:16.223654] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:21.130 [2024-07-14 03:16:16.223675] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.130 [2024-07-14 03:16:16.233059] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:21.130 [2024-07-14 03:16:16.233303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.130 [2024-07-14 03:16:16.233533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.131 [2024-07-14 03:16:16.233558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x780590 with addr=10.0.0.2, port=4420 00:28:21.131 [2024-07-14 03:16:16.233575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780590 is same with the state(5) to be set 00:28:21.131 [2024-07-14 03:16:16.233597] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x780590 (9): Bad file descriptor 00:28:21.131 [2024-07-14 03:16:16.233617] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:21.131 [2024-07-14 03:16:16.233646] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:21.131 [2024-07-14 03:16:16.233659] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:21.131 [2024-07-14 03:16:16.233678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.131 [2024-07-14 03:16:16.243127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:21.131 [2024-07-14 03:16:16.243367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.131 [2024-07-14 03:16:16.243585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.131 [2024-07-14 03:16:16.243614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x780590 with addr=10.0.0.2, port=4420 00:28:21.131 [2024-07-14 03:16:16.243632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780590 is same with the state(5) to be set 00:28:21.131 [2024-07-14 03:16:16.243656] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x780590 (9): Bad file descriptor 00:28:21.131 [2024-07-14 03:16:16.243678] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:21.131 [2024-07-14 03:16:16.243694] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:21.131 [2024-07-14 03:16:16.243709] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:21.131 [2024-07-14 03:16:16.243730] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.131 [2024-07-14 03:16:16.253203] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:21.131 [2024-07-14 03:16:16.253464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.131 [2024-07-14 03:16:16.253666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:21.131 [2024-07-14 03:16:16.253694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x780590 with addr=10.0.0.2, port=4420 00:28:21.131 [2024-07-14 03:16:16.253712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x780590 is same with the state(5) to be set 00:28:21.131 [2024-07-14 03:16:16.253737] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x780590 (9): Bad file descriptor 00:28:21.131 [2024-07-14 03:16:16.253759] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:21.131 [2024-07-14 03:16:16.253775] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:21.131 [2024-07-14 03:16:16.253791] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:21.131 [2024-07-14 03:16:16.253812] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:21.131 [2024-07-14 03:16:16.253860] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:21.131 [2024-07-14 03:16:16.253907] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:22.069 03:16:17 -- host/discovery.sh@128 -- # get_subsystem_names 00:28:22.069 03:16:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:22.069 03:16:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:22.069 03:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.069 03:16:17 -- host/discovery.sh@59 -- # sort 00:28:22.069 03:16:17 -- common/autotest_common.sh@10 -- # set +x 00:28:22.069 03:16:17 -- host/discovery.sh@59 -- # xargs 00:28:22.069 03:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:22.069 03:16:17 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.069 03:16:17 -- host/discovery.sh@129 -- # get_bdev_list 00:28:22.069 03:16:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:22.069 03:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.069 03:16:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:22.069 03:16:17 -- common/autotest_common.sh@10 -- # set +x 00:28:22.069 03:16:17 -- host/discovery.sh@55 -- # sort 00:28:22.069 03:16:17 -- host/discovery.sh@55 -- # xargs 00:28:22.069 03:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:22.069 03:16:17 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:22.069 03:16:17 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:28:22.069 03:16:17 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:22.069 03:16:17 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:22.069 03:16:17 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:28:22.069 03:16:17 -- common/autotest_common.sh@10 -- # set +x 00:28:22.069 03:16:17 -- host/discovery.sh@63 -- # sort -n 00:28:22.069 03:16:17 -- host/discovery.sh@63 -- # xargs 00:28:22.069 03:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:22.069 03:16:17 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:28:22.069 03:16:17 -- host/discovery.sh@131 -- # get_notification_count 00:28:22.069 03:16:17 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:22.069 03:16:17 -- host/discovery.sh@74 -- # jq '. | length' 00:28:22.069 03:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.069 03:16:17 -- common/autotest_common.sh@10 -- # set +x 00:28:22.069 03:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:22.328 03:16:17 -- host/discovery.sh@74 -- # notification_count=0 00:28:22.328 03:16:17 -- host/discovery.sh@75 -- # notify_id=2 00:28:22.328 03:16:17 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:28:22.328 03:16:17 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:22.328 03:16:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.328 03:16:17 -- common/autotest_common.sh@10 -- # set +x 00:28:22.328 03:16:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:22.328 03:16:17 -- host/discovery.sh@135 -- # sleep 1 00:28:23.302 03:16:18 -- host/discovery.sh@136 -- # get_subsystem_names 00:28:23.302 03:16:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:23.302 03:16:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:23.302 03:16:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:23.302 03:16:18 -- common/autotest_common.sh@10 -- # set +x 00:28:23.302 03:16:18 -- host/discovery.sh@59 -- # sort 00:28:23.302 03:16:18 -- host/discovery.sh@59 -- # xargs 00:28:23.302 03:16:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:23.302 03:16:18 -- 
host/discovery.sh@136 -- # [[ '' == '' ]] 00:28:23.302 03:16:18 -- host/discovery.sh@137 -- # get_bdev_list 00:28:23.302 03:16:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:23.302 03:16:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:23.302 03:16:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:23.302 03:16:18 -- common/autotest_common.sh@10 -- # set +x 00:28:23.302 03:16:18 -- host/discovery.sh@55 -- # sort 00:28:23.302 03:16:18 -- host/discovery.sh@55 -- # xargs 00:28:23.302 03:16:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:23.302 03:16:18 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:28:23.302 03:16:18 -- host/discovery.sh@138 -- # get_notification_count 00:28:23.302 03:16:18 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:23.302 03:16:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:23.302 03:16:18 -- host/discovery.sh@74 -- # jq '. | length' 00:28:23.302 03:16:18 -- common/autotest_common.sh@10 -- # set +x 00:28:23.302 03:16:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:23.302 03:16:18 -- host/discovery.sh@74 -- # notification_count=2 00:28:23.302 03:16:18 -- host/discovery.sh@75 -- # notify_id=4 00:28:23.302 03:16:18 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:28:23.302 03:16:18 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:23.302 03:16:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:23.302 03:16:18 -- common/autotest_common.sh@10 -- # set +x 00:28:24.677 [2024-07-14 03:16:19.514200] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:24.677 [2024-07-14 03:16:19.514226] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:24.677 [2024-07-14 03:16:19.514248] 
bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:24.677 [2024-07-14 03:16:19.600530] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:24.677 [2024-07-14 03:16:19.667541] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:24.677 [2024-07-14 03:16:19.667579] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:24.677 03:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.677 03:16:19 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:24.677 03:16:19 -- common/autotest_common.sh@640 -- # local es=0 00:28:24.677 03:16:19 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:24.677 03:16:19 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:24.677 03:16:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:24.677 03:16:19 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:24.677 03:16:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:24.677 03:16:19 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:24.677 03:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.677 03:16:19 -- common/autotest_common.sh@10 -- # set +x 00:28:24.677 request: 00:28:24.677 { 00:28:24.677 "name": "nvme", 00:28:24.677 "trtype": "tcp", 00:28:24.677 "traddr": "10.0.0.2", 00:28:24.677 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:24.677 "adrfam": 
"ipv4", 00:28:24.677 "trsvcid": "8009", 00:28:24.677 "wait_for_attach": true, 00:28:24.677 "method": "bdev_nvme_start_discovery", 00:28:24.677 "req_id": 1 00:28:24.677 } 00:28:24.677 Got JSON-RPC error response 00:28:24.677 response: 00:28:24.677 { 00:28:24.677 "code": -17, 00:28:24.677 "message": "File exists" 00:28:24.677 } 00:28:24.677 03:16:19 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:24.677 03:16:19 -- common/autotest_common.sh@643 -- # es=1 00:28:24.677 03:16:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:24.677 03:16:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:24.677 03:16:19 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:24.677 03:16:19 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:28:24.677 03:16:19 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:24.677 03:16:19 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:24.677 03:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.677 03:16:19 -- common/autotest_common.sh@10 -- # set +x 00:28:24.677 03:16:19 -- host/discovery.sh@67 -- # sort 00:28:24.677 03:16:19 -- host/discovery.sh@67 -- # xargs 00:28:24.677 03:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.677 03:16:19 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:28:24.677 03:16:19 -- host/discovery.sh@147 -- # get_bdev_list 00:28:24.677 03:16:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:24.677 03:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.677 03:16:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:24.677 03:16:19 -- common/autotest_common.sh@10 -- # set +x 00:28:24.677 03:16:19 -- host/discovery.sh@55 -- # sort 00:28:24.677 03:16:19 -- host/discovery.sh@55 -- # xargs 00:28:24.677 03:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.677 03:16:19 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 
00:28:24.677 03:16:19 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:24.677 03:16:19 -- common/autotest_common.sh@640 -- # local es=0 00:28:24.677 03:16:19 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:24.677 03:16:19 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:24.677 03:16:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:24.677 03:16:19 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:24.677 03:16:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:24.677 03:16:19 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:24.677 03:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.677 03:16:19 -- common/autotest_common.sh@10 -- # set +x 00:28:24.677 request: 00:28:24.677 { 00:28:24.677 "name": "nvme_second", 00:28:24.677 "trtype": "tcp", 00:28:24.677 "traddr": "10.0.0.2", 00:28:24.677 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:24.677 "adrfam": "ipv4", 00:28:24.677 "trsvcid": "8009", 00:28:24.677 "wait_for_attach": true, 00:28:24.677 "method": "bdev_nvme_start_discovery", 00:28:24.677 "req_id": 1 00:28:24.677 } 00:28:24.677 Got JSON-RPC error response 00:28:24.677 response: 00:28:24.677 { 00:28:24.677 "code": -17, 00:28:24.677 "message": "File exists" 00:28:24.677 } 00:28:24.677 03:16:19 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:24.677 03:16:19 -- common/autotest_common.sh@643 -- # es=1 00:28:24.677 03:16:19 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:24.677 03:16:19 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:24.677 03:16:19 -- 
common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:24.677 03:16:19 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:28:24.677 03:16:19 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:24.677 03:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.677 03:16:19 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:24.677 03:16:19 -- common/autotest_common.sh@10 -- # set +x 00:28:24.677 03:16:19 -- host/discovery.sh@67 -- # sort 00:28:24.677 03:16:19 -- host/discovery.sh@67 -- # xargs 00:28:24.677 03:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.677 03:16:19 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:28:24.677 03:16:19 -- host/discovery.sh@153 -- # get_bdev_list 00:28:24.677 03:16:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:24.677 03:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.677 03:16:19 -- common/autotest_common.sh@10 -- # set +x 00:28:24.677 03:16:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:24.677 03:16:19 -- host/discovery.sh@55 -- # sort 00:28:24.677 03:16:19 -- host/discovery.sh@55 -- # xargs 00:28:24.677 03:16:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:24.677 03:16:19 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:24.678 03:16:19 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:24.678 03:16:19 -- common/autotest_common.sh@640 -- # local es=0 00:28:24.678 03:16:19 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:24.678 03:16:19 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:24.678 03:16:19 -- common/autotest_common.sh@632 -- # case "$(type -t 
"$arg")" in 00:28:24.678 03:16:19 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:24.678 03:16:19 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:24.678 03:16:19 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:24.678 03:16:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:24.678 03:16:19 -- common/autotest_common.sh@10 -- # set +x 00:28:25.615 [2024-07-14 03:16:20.867086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.615 [2024-07-14 03:16:20.867333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.615 [2024-07-14 03:16:20.867365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x77ea80 with addr=10.0.0.2, port=8010 00:28:25.615 [2024-07-14 03:16:20.867401] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:25.615 [2024-07-14 03:16:20.867431] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:25.615 [2024-07-14 03:16:20.867458] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:26.991 [2024-07-14 03:16:21.869387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.991 [2024-07-14 03:16:21.869614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.991 [2024-07-14 03:16:21.869641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x77ea80 with addr=10.0.0.2, port=8010 00:28:26.991 [2024-07-14 03:16:21.869661] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:26.991 [2024-07-14 03:16:21.869675] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:26.991 [2024-07-14 03:16:21.869688] bdev_nvme.c:6821:discovery_poller: *ERROR*: 
Discovery[10.0.0.2:8010] could not start discovery connect 00:28:27.926 [2024-07-14 03:16:22.871628] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:27.926 request: 00:28:27.926 { 00:28:27.926 "name": "nvme_second", 00:28:27.926 "trtype": "tcp", 00:28:27.926 "traddr": "10.0.0.2", 00:28:27.926 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:27.926 "adrfam": "ipv4", 00:28:27.926 "trsvcid": "8010", 00:28:27.926 "attach_timeout_ms": 3000, 00:28:27.926 "method": "bdev_nvme_start_discovery", 00:28:27.926 "req_id": 1 00:28:27.926 } 00:28:27.926 Got JSON-RPC error response 00:28:27.926 response: 00:28:27.926 { 00:28:27.926 "code": -110, 00:28:27.926 "message": "Connection timed out" 00:28:27.926 } 00:28:27.926 03:16:22 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:27.926 03:16:22 -- common/autotest_common.sh@643 -- # es=1 00:28:27.926 03:16:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:27.926 03:16:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:27.926 03:16:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:27.926 03:16:22 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:28:27.926 03:16:22 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:27.926 03:16:22 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:27.926 03:16:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:27.926 03:16:22 -- host/discovery.sh@67 -- # sort 00:28:27.926 03:16:22 -- common/autotest_common.sh@10 -- # set +x 00:28:27.926 03:16:22 -- host/discovery.sh@67 -- # xargs 00:28:27.926 03:16:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:27.926 03:16:22 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:28:27.926 03:16:22 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:28:27.926 03:16:22 -- host/discovery.sh@162 -- # kill 2114330 00:28:27.926 03:16:22 -- host/discovery.sh@163 -- # nvmftestfini 00:28:27.926 
03:16:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:27.926 03:16:22 -- nvmf/common.sh@116 -- # sync 00:28:27.926 03:16:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:27.926 03:16:22 -- nvmf/common.sh@119 -- # set +e 00:28:27.926 03:16:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:27.926 03:16:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:27.926 rmmod nvme_tcp 00:28:27.926 rmmod nvme_fabrics 00:28:27.926 rmmod nvme_keyring 00:28:27.926 03:16:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:27.926 03:16:22 -- nvmf/common.sh@123 -- # set -e 00:28:27.926 03:16:22 -- nvmf/common.sh@124 -- # return 0 00:28:27.926 03:16:22 -- nvmf/common.sh@477 -- # '[' -n 2114177 ']' 00:28:27.926 03:16:22 -- nvmf/common.sh@478 -- # killprocess 2114177 00:28:27.926 03:16:22 -- common/autotest_common.sh@926 -- # '[' -z 2114177 ']' 00:28:27.926 03:16:22 -- common/autotest_common.sh@930 -- # kill -0 2114177 00:28:27.926 03:16:22 -- common/autotest_common.sh@931 -- # uname 00:28:27.926 03:16:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:27.926 03:16:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2114177 00:28:27.926 03:16:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:27.926 03:16:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:27.926 03:16:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2114177' 00:28:27.926 killing process with pid 2114177 00:28:27.926 03:16:23 -- common/autotest_common.sh@945 -- # kill 2114177 00:28:27.926 03:16:23 -- common/autotest_common.sh@950 -- # wait 2114177 00:28:28.183 03:16:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:28.183 03:16:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:28.183 03:16:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:28.183 03:16:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:28.183 03:16:23 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:28:28.183 03:16:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.183 03:16:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:28.183 03:16:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.088 03:16:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:30.088 00:28:30.088 real 0m17.126s 00:28:30.088 user 0m26.750s 00:28:30.088 sys 0m2.795s 00:28:30.088 03:16:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:30.088 03:16:25 -- common/autotest_common.sh@10 -- # set +x 00:28:30.088 ************************************ 00:28:30.088 END TEST nvmf_discovery 00:28:30.088 ************************************ 00:28:30.088 03:16:25 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:30.088 03:16:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:30.088 03:16:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:30.088 03:16:25 -- common/autotest_common.sh@10 -- # set +x 00:28:30.088 ************************************ 00:28:30.088 START TEST nvmf_discovery_remove_ifc 00:28:30.088 ************************************ 00:28:30.088 03:16:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:30.347 * Looking for test storage... 
00:28:30.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:30.347 03:16:25 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:30.347 03:16:25 -- nvmf/common.sh@7 -- # uname -s 00:28:30.347 03:16:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.347 03:16:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.347 03:16:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.347 03:16:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.347 03:16:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.347 03:16:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.347 03:16:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.347 03:16:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.347 03:16:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.347 03:16:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.347 03:16:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:30.347 03:16:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:30.347 03:16:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.347 03:16:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.347 03:16:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:30.347 03:16:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:30.347 03:16:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.347 03:16:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.347 03:16:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.347 03:16:25 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.347 03:16:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.347 03:16:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.347 03:16:25 -- paths/export.sh@5 -- # export PATH 00:28:30.347 03:16:25 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.347 03:16:25 -- nvmf/common.sh@46 -- # : 0 00:28:30.347 03:16:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:30.347 03:16:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:30.347 03:16:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:30.347 03:16:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.347 03:16:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.347 03:16:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:30.347 03:16:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:30.347 03:16:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:30.347 03:16:25 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:30.347 03:16:25 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:30.347 03:16:25 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:30.347 03:16:25 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:30.347 03:16:25 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:30.347 03:16:25 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:30.347 03:16:25 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:30.347 03:16:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:30.347 03:16:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.347 03:16:25 -- nvmf/common.sh@436 -- # prepare_net_devs 
00:28:30.347 03:16:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:30.347 03:16:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:30.347 03:16:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.347 03:16:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:30.347 03:16:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.347 03:16:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:30.347 03:16:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:30.347 03:16:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:30.347 03:16:25 -- common/autotest_common.sh@10 -- # set +x 00:28:32.252 03:16:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:32.252 03:16:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:32.252 03:16:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:32.252 03:16:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:32.252 03:16:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:32.252 03:16:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:32.252 03:16:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:32.252 03:16:27 -- nvmf/common.sh@294 -- # net_devs=() 00:28:32.252 03:16:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:32.252 03:16:27 -- nvmf/common.sh@295 -- # e810=() 00:28:32.252 03:16:27 -- nvmf/common.sh@295 -- # local -ga e810 00:28:32.252 03:16:27 -- nvmf/common.sh@296 -- # x722=() 00:28:32.252 03:16:27 -- nvmf/common.sh@296 -- # local -ga x722 00:28:32.252 03:16:27 -- nvmf/common.sh@297 -- # mlx=() 00:28:32.252 03:16:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:32.252 03:16:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:32.252 03:16:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:32.252 03:16:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:32.252 03:16:27 -- nvmf/common.sh@305 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:32.252 03:16:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:32.252 03:16:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:32.252 03:16:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:32.252 03:16:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:32.252 03:16:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:32.252 03:16:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:32.252 03:16:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:32.252 03:16:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:32.252 03:16:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:32.252 03:16:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:32.252 03:16:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:32.252 03:16:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:32.252 03:16:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:32.252 03:16:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:32.252 03:16:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:32.252 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:32.252 03:16:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:32.252 03:16:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:32.252 03:16:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.252 03:16:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.252 03:16:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:32.252 03:16:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:32.252 03:16:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:32.252 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:32.252 03:16:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:32.252 
03:16:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:32.252 03:16:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.252 03:16:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.252 03:16:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:32.252 03:16:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:32.252 03:16:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:32.252 03:16:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:32.252 03:16:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:32.252 03:16:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.252 03:16:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:32.252 03:16:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.252 03:16:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:32.252 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:32.252 03:16:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.252 03:16:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:32.252 03:16:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.252 03:16:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:32.252 03:16:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.252 03:16:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:32.252 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:32.252 03:16:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.252 03:16:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:32.252 03:16:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:32.252 03:16:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:32.252 03:16:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:32.252 03:16:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:32.252 03:16:27 -- nvmf/common.sh@228 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:32.252 03:16:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:32.252 03:16:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:32.252 03:16:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:32.252 03:16:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:32.252 03:16:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:32.252 03:16:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:32.252 03:16:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:32.252 03:16:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:32.252 03:16:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:32.252 03:16:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:32.252 03:16:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:32.252 03:16:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:32.252 03:16:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:32.252 03:16:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.252 03:16:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:32.252 03:16:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:32.253 03:16:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:32.253 03:16:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:32.253 03:16:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:32.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:32.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:28:32.253 00:28:32.253 --- 10.0.0.2 ping statistics --- 00:28:32.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.253 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:28:32.253 03:16:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:32.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:32.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:28:32.253 00:28:32.253 --- 10.0.0.1 ping statistics --- 00:28:32.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.253 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:28:32.253 03:16:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:32.253 03:16:27 -- nvmf/common.sh@410 -- # return 0 00:28:32.253 03:16:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:32.253 03:16:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:32.253 03:16:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:32.253 03:16:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:32.253 03:16:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:32.253 03:16:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:32.253 03:16:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:32.253 03:16:27 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:32.253 03:16:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:32.253 03:16:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:32.253 03:16:27 -- common/autotest_common.sh@10 -- # set +x 00:28:32.253 03:16:27 -- nvmf/common.sh@469 -- # nvmfpid=2117802 00:28:32.253 03:16:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:32.253 03:16:27 -- nvmf/common.sh@470 -- # waitforlisten 2117802 00:28:32.253 03:16:27 -- 
common/autotest_common.sh@819 -- # '[' -z 2117802 ']' 00:28:32.253 03:16:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.253 03:16:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:32.253 03:16:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.253 03:16:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:32.253 03:16:27 -- common/autotest_common.sh@10 -- # set +x 00:28:32.253 [2024-07-14 03:16:27.472671] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:32.253 [2024-07-14 03:16:27.472756] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.513 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.513 [2024-07-14 03:16:27.536906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.513 [2024-07-14 03:16:27.619080] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:32.513 [2024-07-14 03:16:27.619239] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:32.513 [2024-07-14 03:16:27.619256] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.513 [2024-07-14 03:16:27.619268] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:32.513 [2024-07-14 03:16:27.619303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.451 03:16:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:33.451 03:16:28 -- common/autotest_common.sh@852 -- # return 0 00:28:33.451 03:16:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:33.451 03:16:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:33.451 03:16:28 -- common/autotest_common.sh@10 -- # set +x 00:28:33.451 03:16:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.451 03:16:28 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:33.451 03:16:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.451 03:16:28 -- common/autotest_common.sh@10 -- # set +x 00:28:33.451 [2024-07-14 03:16:28.483239] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.451 [2024-07-14 03:16:28.491413] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:33.451 null0 00:28:33.451 [2024-07-14 03:16:28.523374] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.451 03:16:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:33.451 03:16:28 -- host/discovery_remove_ifc.sh@59 -- # hostpid=2117955 00:28:33.451 03:16:28 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:33.451 03:16:28 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2117955 /tmp/host.sock 00:28:33.451 03:16:28 -- common/autotest_common.sh@819 -- # '[' -z 2117955 ']' 00:28:33.451 03:16:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:28:33.451 03:16:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:33.451 03:16:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /tmp/host.sock...' 00:28:33.451 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:33.451 03:16:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:33.451 03:16:28 -- common/autotest_common.sh@10 -- # set +x 00:28:33.451 [2024-07-14 03:16:28.587365] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:33.451 [2024-07-14 03:16:28.587449] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2117955 ] 00:28:33.451 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.451 [2024-07-14 03:16:28.653989] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:33.711 [2024-07-14 03:16:28.743470] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:33.711 [2024-07-14 03:16:28.743644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.711 03:16:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:33.711 03:16:28 -- common/autotest_common.sh@852 -- # return 0 00:28:33.711 03:16:28 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:33.711 03:16:28 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:33.711 03:16:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.711 03:16:28 -- common/autotest_common.sh@10 -- # set +x 00:28:33.711 03:16:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:33.711 03:16:28 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:33.711 03:16:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.711 03:16:28 -- common/autotest_common.sh@10 -- # set +x 00:28:33.711 03:16:28 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:33.711 03:16:28 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:33.711 03:16:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.711 03:16:28 -- common/autotest_common.sh@10 -- # set +x 00:28:35.092 [2024-07-14 03:16:29.992039] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:35.092 [2024-07-14 03:16:29.992093] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:35.092 [2024-07-14 03:16:29.992119] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:35.092 [2024-07-14 03:16:30.079425] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:35.092 [2024-07-14 03:16:30.142282] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:35.092 [2024-07-14 03:16:30.142338] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:35.092 [2024-07-14 03:16:30.142381] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:35.092 [2024-07-14 03:16:30.142406] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:35.092 [2024-07-14 03:16:30.142445] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:35.092 03:16:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.092 03:16:30 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:35.092 03:16:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:35.092 03:16:30 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:35.092 03:16:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:35.092 03:16:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.092 03:16:30 -- common/autotest_common.sh@10 -- # set +x 00:28:35.092 03:16:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:35.092 03:16:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:35.092 [2024-07-14 03:16:30.149600] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1769f00 was disconnected and freed. delete nvme_qpair. 00:28:35.092 03:16:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.092 03:16:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:35.092 03:16:30 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:35.092 03:16:30 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:35.092 03:16:30 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:35.092 03:16:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:35.092 03:16:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:35.092 03:16:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:35.092 03:16:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.092 03:16:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:35.092 03:16:30 -- common/autotest_common.sh@10 -- # set +x 00:28:35.092 03:16:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:35.092 03:16:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.092 03:16:30 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:35.092 03:16:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:36.467 03:16:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:36.467 03:16:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:28:36.467 03:16:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:36.467 03:16:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:36.467 03:16:31 -- common/autotest_common.sh@10 -- # set +x 00:28:36.467 03:16:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:36.467 03:16:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:36.467 03:16:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:36.467 03:16:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:36.467 03:16:31 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:37.403 03:16:32 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:37.403 03:16:32 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:37.403 03:16:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.403 03:16:32 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:37.403 03:16:32 -- common/autotest_common.sh@10 -- # set +x 00:28:37.403 03:16:32 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:37.403 03:16:32 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:37.403 03:16:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.403 03:16:32 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:37.403 03:16:32 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:38.341 03:16:33 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:38.341 03:16:33 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:38.341 03:16:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:38.341 03:16:33 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:38.341 03:16:33 -- common/autotest_common.sh@10 -- # set +x 00:28:38.341 03:16:33 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:38.341 03:16:33 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:38.341 03:16:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:38.341 03:16:33 
-- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:38.341 03:16:33 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:39.278 03:16:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:39.278 03:16:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:39.278 03:16:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:39.278 03:16:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.278 03:16:34 -- common/autotest_common.sh@10 -- # set +x 00:28:39.278 03:16:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:39.278 03:16:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:39.278 03:16:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.278 03:16:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:39.278 03:16:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:40.657 03:16:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:40.657 03:16:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:40.657 03:16:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:40.657 03:16:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:40.657 03:16:35 -- common/autotest_common.sh@10 -- # set +x 00:28:40.657 03:16:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:40.657 03:16:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:40.657 03:16:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:40.657 03:16:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:40.657 03:16:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:40.657 [2024-07-14 03:16:35.583685] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:40.657 [2024-07-14 03:16:35.583747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.657 [2024-07-14 03:16:35.583769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.657 [2024-07-14 03:16:35.583787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.657 [2024-07-14 03:16:35.583802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.657 [2024-07-14 03:16:35.583817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.657 [2024-07-14 03:16:35.583832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.657 [2024-07-14 03:16:35.583847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.657 [2024-07-14 03:16:35.583862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.657 [2024-07-14 03:16:35.583885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:40.657 [2024-07-14 03:16:35.583915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:40.657 [2024-07-14 03:16:35.583927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731150 is same with the state(5) to be set 00:28:40.657 [2024-07-14 03:16:35.593704] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1731150 (9): Bad file descriptor 00:28:40.657 [2024-07-14 03:16:35.603756] 
nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:41.624 03:16:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:41.624 03:16:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:41.624 03:16:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:41.624 03:16:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:41.624 03:16:36 -- common/autotest_common.sh@10 -- # set +x 00:28:41.624 03:16:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:41.624 03:16:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:41.624 [2024-07-14 03:16:36.637899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:42.563 [2024-07-14 03:16:37.661913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:42.563 [2024-07-14 03:16:37.661973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1731150 with addr=10.0.0.2, port=4420 00:28:42.563 [2024-07-14 03:16:37.662000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1731150 is same with the state(5) to be set 00:28:42.563 [2024-07-14 03:16:37.662036] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:28:42.563 [2024-07-14 03:16:37.662055] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:42.563 [2024-07-14 03:16:37.662069] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:42.563 [2024-07-14 03:16:37.662085] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:28:42.563 [2024-07-14 03:16:37.662518] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1731150 (9): Bad file descriptor 00:28:42.563 [2024-07-14 03:16:37.662573] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:42.563 [2024-07-14 03:16:37.662615] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:42.563 [2024-07-14 03:16:37.662655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.563 [2024-07-14 03:16:37.662677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.563 [2024-07-14 03:16:37.662697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.563 [2024-07-14 03:16:37.662711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.563 [2024-07-14 03:16:37.662727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.563 [2024-07-14 03:16:37.662741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.563 [2024-07-14 
03:16:37.662756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.563 [2024-07-14 03:16:37.662770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.563 [2024-07-14 03:16:37.662785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:42.563 [2024-07-14 03:16:37.662798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:42.563 [2024-07-14 03:16:37.662813] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:28:42.563 [2024-07-14 03:16:37.663036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1730680 (9): Bad file descriptor 00:28:42.563 [2024-07-14 03:16:37.664053] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:42.563 [2024-07-14 03:16:37.664074] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:42.563 03:16:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:42.563 03:16:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:42.563 03:16:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:43.501 03:16:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:43.501 03:16:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:43.501 03:16:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:43.501 03:16:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:43.501 03:16:38 -- common/autotest_common.sh@10 -- # set +x 00:28:43.501 03:16:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:43.501 03:16:38 -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:28:43.501 03:16:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:43.501 03:16:38 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:43.501 03:16:38 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.501 03:16:38 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.759 03:16:38 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:43.759 03:16:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:43.759 03:16:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:43.759 03:16:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:43.759 03:16:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:43.759 03:16:38 -- common/autotest_common.sh@10 -- # set +x 00:28:43.759 03:16:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:43.759 03:16:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:43.759 03:16:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:43.759 03:16:38 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:43.759 03:16:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:44.698 [2024-07-14 03:16:39.677463] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:44.698 [2024-07-14 03:16:39.677490] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:44.698 [2024-07-14 03:16:39.677513] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:44.698 03:16:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:44.698 03:16:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:44.698 03:16:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:44.698 03:16:39 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:44.698 03:16:39 -- common/autotest_common.sh@10 -- # set +x 00:28:44.698 03:16:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:44.698 03:16:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:44.698 03:16:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:44.698 [2024-07-14 03:16:39.805962] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:44.698 03:16:39 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:44.698 03:16:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:44.957 [2024-07-14 03:16:40.029492] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:44.957 [2024-07-14 03:16:40.029547] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:44.957 [2024-07-14 03:16:40.029584] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:44.957 [2024-07-14 03:16:40.029609] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:44.957 [2024-07-14 03:16:40.029624] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:44.957 [2024-07-14 03:16:40.036229] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x173e4c0 was disconnected and freed. delete nvme_qpair. 
00:28:45.893 03:16:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:45.893 03:16:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:45.893 03:16:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:45.893 03:16:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:45.893 03:16:40 -- common/autotest_common.sh@10 -- # set +x 00:28:45.893 03:16:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:45.893 03:16:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:45.893 03:16:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:45.893 03:16:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:45.893 03:16:40 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:45.893 03:16:40 -- host/discovery_remove_ifc.sh@90 -- # killprocess 2117955 00:28:45.893 03:16:40 -- common/autotest_common.sh@926 -- # '[' -z 2117955 ']' 00:28:45.893 03:16:40 -- common/autotest_common.sh@930 -- # kill -0 2117955 00:28:45.893 03:16:40 -- common/autotest_common.sh@931 -- # uname 00:28:45.893 03:16:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:45.893 03:16:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2117955 00:28:45.893 03:16:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:45.893 03:16:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:45.893 03:16:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2117955' 00:28:45.893 killing process with pid 2117955 00:28:45.893 03:16:40 -- common/autotest_common.sh@945 -- # kill 2117955 00:28:45.893 03:16:40 -- common/autotest_common.sh@950 -- # wait 2117955 00:28:45.893 03:16:41 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:45.893 03:16:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:45.893 03:16:41 -- nvmf/common.sh@116 -- # sync 00:28:45.893 03:16:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:45.893 
03:16:41 -- nvmf/common.sh@119 -- # set +e 00:28:45.893 03:16:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:45.893 03:16:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:45.893 rmmod nvme_tcp 00:28:45.893 rmmod nvme_fabrics 00:28:46.153 rmmod nvme_keyring 00:28:46.153 03:16:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:46.153 03:16:41 -- nvmf/common.sh@123 -- # set -e 00:28:46.153 03:16:41 -- nvmf/common.sh@124 -- # return 0 00:28:46.153 03:16:41 -- nvmf/common.sh@477 -- # '[' -n 2117802 ']' 00:28:46.153 03:16:41 -- nvmf/common.sh@478 -- # killprocess 2117802 00:28:46.153 03:16:41 -- common/autotest_common.sh@926 -- # '[' -z 2117802 ']' 00:28:46.153 03:16:41 -- common/autotest_common.sh@930 -- # kill -0 2117802 00:28:46.153 03:16:41 -- common/autotest_common.sh@931 -- # uname 00:28:46.153 03:16:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:46.153 03:16:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2117802 00:28:46.153 03:16:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:46.153 03:16:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:46.153 03:16:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2117802' 00:28:46.153 killing process with pid 2117802 00:28:46.153 03:16:41 -- common/autotest_common.sh@945 -- # kill 2117802 00:28:46.153 03:16:41 -- common/autotest_common.sh@950 -- # wait 2117802 00:28:46.413 03:16:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:46.413 03:16:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:46.413 03:16:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:46.413 03:16:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:46.413 03:16:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:46.413 03:16:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.413 03:16:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:28:46.413 03:16:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.314 03:16:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:48.314 00:28:48.314 real 0m18.191s 00:28:48.314 user 0m25.315s 00:28:48.314 sys 0m2.916s 00:28:48.314 03:16:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:48.314 03:16:43 -- common/autotest_common.sh@10 -- # set +x 00:28:48.314 ************************************ 00:28:48.314 END TEST nvmf_discovery_remove_ifc 00:28:48.314 ************************************ 00:28:48.314 03:16:43 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:28:48.314 03:16:43 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:48.314 03:16:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:48.314 03:16:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:48.314 03:16:43 -- common/autotest_common.sh@10 -- # set +x 00:28:48.314 ************************************ 00:28:48.314 START TEST nvmf_digest 00:28:48.314 ************************************ 00:28:48.314 03:16:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:48.572 * Looking for test storage... 
00:28:48.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:48.572 03:16:43 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:48.572 03:16:43 -- nvmf/common.sh@7 -- # uname -s 00:28:48.572 03:16:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:48.572 03:16:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:48.572 03:16:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:48.572 03:16:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:48.572 03:16:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:48.572 03:16:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:48.572 03:16:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:48.572 03:16:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:48.572 03:16:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:48.572 03:16:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:48.572 03:16:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:48.572 03:16:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:48.572 03:16:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:48.572 03:16:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:48.572 03:16:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:48.572 03:16:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:48.572 03:16:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:48.572 03:16:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:48.572 03:16:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:48.572 03:16:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.572 03:16:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.572 03:16:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.572 03:16:43 -- paths/export.sh@5 -- # export PATH 00:28:48.572 03:16:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:48.572 03:16:43 -- nvmf/common.sh@46 -- # : 0 00:28:48.572 03:16:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:48.572 03:16:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:48.572 03:16:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:48.572 03:16:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:48.572 03:16:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:48.572 03:16:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:48.572 03:16:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:48.572 03:16:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:48.573 03:16:43 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:48.573 03:16:43 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:48.573 03:16:43 -- host/digest.sh@16 -- # runtime=2 00:28:48.573 03:16:43 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:28:48.573 03:16:43 -- host/digest.sh@132 -- # nvmftestinit 00:28:48.573 03:16:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:48.573 03:16:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:48.573 03:16:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:48.573 03:16:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:48.573 03:16:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:48.573 03:16:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.573 03:16:43 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:28:48.573 03:16:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.573 03:16:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:48.573 03:16:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:48.573 03:16:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:48.573 03:16:43 -- common/autotest_common.sh@10 -- # set +x 00:28:50.477 03:16:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:50.477 03:16:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:50.477 03:16:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:50.477 03:16:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:50.477 03:16:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:50.477 03:16:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:50.477 03:16:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:50.477 03:16:45 -- nvmf/common.sh@294 -- # net_devs=() 00:28:50.477 03:16:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:50.477 03:16:45 -- nvmf/common.sh@295 -- # e810=() 00:28:50.477 03:16:45 -- nvmf/common.sh@295 -- # local -ga e810 00:28:50.477 03:16:45 -- nvmf/common.sh@296 -- # x722=() 00:28:50.477 03:16:45 -- nvmf/common.sh@296 -- # local -ga x722 00:28:50.477 03:16:45 -- nvmf/common.sh@297 -- # mlx=() 00:28:50.477 03:16:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:50.477 03:16:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.477 03:16:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.477 03:16:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.477 03:16:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.477 03:16:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.477 03:16:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.477 03:16:45 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.477 03:16:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.477 03:16:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.477 03:16:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.477 03:16:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.477 03:16:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:50.477 03:16:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:50.477 03:16:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:50.477 03:16:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:50.477 03:16:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:50.477 03:16:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:50.477 03:16:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:50.477 03:16:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:50.477 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:50.477 03:16:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:50.477 03:16:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:50.477 03:16:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.477 03:16:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.477 03:16:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:50.477 03:16:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:50.477 03:16:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:50.477 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:50.477 03:16:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:50.477 03:16:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:50.477 03:16:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.477 03:16:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.477 03:16:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:28:50.477 03:16:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:50.477 03:16:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:50.477 03:16:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:50.477 03:16:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:50.477 03:16:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.477 03:16:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:50.477 03:16:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.477 03:16:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:50.477 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:50.477 03:16:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.477 03:16:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:50.477 03:16:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.477 03:16:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:50.477 03:16:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.477 03:16:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:50.477 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:50.477 03:16:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.477 03:16:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:50.477 03:16:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:50.477 03:16:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:50.477 03:16:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:50.477 03:16:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:50.477 03:16:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.477 03:16:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.477 03:16:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.477 03:16:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:50.477 03:16:45 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.477 03:16:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.477 03:16:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:50.477 03:16:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.477 03:16:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.477 03:16:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:50.477 03:16:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:50.477 03:16:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.477 03:16:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:50.477 03:16:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:50.477 03:16:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.477 03:16:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:50.477 03:16:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.477 03:16:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.477 03:16:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.477 03:16:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:50.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:50.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:28:50.477 00:28:50.477 --- 10.0.0.2 ping statistics --- 00:28:50.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.477 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:28:50.477 03:16:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:50.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:28:50.477 00:28:50.477 --- 10.0.0.1 ping statistics --- 00:28:50.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.477 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:28:50.477 03:16:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.477 03:16:45 -- nvmf/common.sh@410 -- # return 0 00:28:50.477 03:16:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:50.477 03:16:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.477 03:16:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:50.477 03:16:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:50.477 03:16:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.477 03:16:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:50.477 03:16:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:50.477 03:16:45 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:50.477 03:16:45 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:28:50.477 03:16:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:50.477 03:16:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:50.477 03:16:45 -- common/autotest_common.sh@10 -- # set +x 00:28:50.477 ************************************ 00:28:50.477 START TEST nvmf_digest_clean 00:28:50.477 ************************************ 00:28:50.477 03:16:45 -- common/autotest_common.sh@1104 -- # run_digest 00:28:50.477 03:16:45 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:28:50.477 03:16:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:50.477 03:16:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:50.477 03:16:45 -- common/autotest_common.sh@10 -- # set +x 00:28:50.477 03:16:45 -- nvmf/common.sh@469 -- # nvmfpid=2121476 00:28:50.477 03:16:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:50.477 03:16:45 -- nvmf/common.sh@470 -- # waitforlisten 2121476 00:28:50.477 03:16:45 -- common/autotest_common.sh@819 -- # '[' -z 2121476 ']' 00:28:50.477 03:16:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.477 03:16:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:50.477 03:16:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.478 03:16:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:50.478 03:16:45 -- common/autotest_common.sh@10 -- # set +x 00:28:50.478 [2024-07-14 03:16:45.702603] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:50.478 [2024-07-14 03:16:45.702678] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.737 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.737 [2024-07-14 03:16:45.774317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.737 [2024-07-14 03:16:45.865220] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:50.737 [2024-07-14 03:16:45.865362] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.737 [2024-07-14 03:16:45.865379] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.737 [2024-07-14 03:16:45.865391] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:50.737 [2024-07-14 03:16:45.865419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:50.737 03:16:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:50.737 03:16:45 -- common/autotest_common.sh@852 -- # return 0 00:28:50.737 03:16:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:50.737 03:16:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:50.737 03:16:45 -- common/autotest_common.sh@10 -- # set +x 00:28:50.737 03:16:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.737 03:16:45 -- host/digest.sh@120 -- # common_target_config 00:28:50.737 03:16:45 -- host/digest.sh@43 -- # rpc_cmd 00:28:50.737 03:16:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.737 03:16:45 -- common/autotest_common.sh@10 -- # set +x 00:28:50.995 null0 00:28:50.995 [2024-07-14 03:16:46.065329] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.995 [2024-07-14 03:16:46.089543] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.995 03:16:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.995 03:16:46 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:28:50.995 03:16:46 -- host/digest.sh@77 -- # local rw bs qd 00:28:50.995 03:16:46 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:50.995 03:16:46 -- host/digest.sh@80 -- # rw=randread 00:28:50.995 03:16:46 -- host/digest.sh@80 -- # bs=4096 00:28:50.995 03:16:46 -- host/digest.sh@80 -- # qd=128 00:28:50.995 03:16:46 -- host/digest.sh@82 -- # bperfpid=2121505 00:28:50.995 03:16:46 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:50.995 03:16:46 -- host/digest.sh@83 -- # waitforlisten 2121505 /var/tmp/bperf.sock 00:28:50.995 03:16:46 -- 
common/autotest_common.sh@819 -- # '[' -z 2121505 ']' 00:28:50.995 03:16:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:50.995 03:16:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:50.995 03:16:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:50.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:50.995 03:16:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:50.995 03:16:46 -- common/autotest_common.sh@10 -- # set +x 00:28:50.995 [2024-07-14 03:16:46.132385] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:50.995 [2024-07-14 03:16:46.132447] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121505 ] 00:28:50.995 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.995 [2024-07-14 03:16:46.193310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.252 [2024-07-14 03:16:46.282773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.252 03:16:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:51.253 03:16:46 -- common/autotest_common.sh@852 -- # return 0 00:28:51.253 03:16:46 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:28:51.253 03:16:46 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:28:51.253 03:16:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:51.510 03:16:46 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.510 03:16:46 -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:51.767 nvme0n1 00:28:51.767 03:16:46 -- host/digest.sh@91 -- # bperf_py perform_tests 00:28:51.767 03:16:46 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:52.026 Running I/O for 2 seconds... 00:28:53.927 00:28:53.927 Latency(us) 00:28:53.927 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:53.927 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:53.927 nvme0n1 : 2.00 16793.82 65.60 0.00 0.00 7612.30 2487.94 19126.80 00:28:53.927 =================================================================================================================== 00:28:53.927 Total : 16793.82 65.60 0.00 0.00 7612.30 2487.94 19126.80 00:28:53.927 0 00:28:53.927 03:16:49 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:28:53.927 03:16:49 -- host/digest.sh@92 -- # get_accel_stats 00:28:53.927 03:16:49 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:53.927 03:16:49 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:53.927 | select(.opcode=="crc32c") 00:28:53.927 | "\(.module_name) \(.executed)"' 00:28:53.927 03:16:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:54.186 03:16:49 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:28:54.186 03:16:49 -- host/digest.sh@93 -- # exp_module=software 00:28:54.186 03:16:49 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:28:54.186 03:16:49 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:54.186 03:16:49 -- host/digest.sh@97 -- # killprocess 2121505 00:28:54.186 03:16:49 -- common/autotest_common.sh@926 -- # '[' -z 2121505 ']' 00:28:54.186 03:16:49 -- 
common/autotest_common.sh@930 -- # kill -0 2121505 00:28:54.187 03:16:49 -- common/autotest_common.sh@931 -- # uname 00:28:54.187 03:16:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:54.187 03:16:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2121505 00:28:54.187 03:16:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:54.187 03:16:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:54.187 03:16:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2121505' 00:28:54.187 killing process with pid 2121505 00:28:54.187 03:16:49 -- common/autotest_common.sh@945 -- # kill 2121505 00:28:54.187 Received shutdown signal, test time was about 2.000000 seconds 00:28:54.187 00:28:54.187 Latency(us) 00:28:54.187 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.187 =================================================================================================================== 00:28:54.187 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:54.187 03:16:49 -- common/autotest_common.sh@950 -- # wait 2121505 00:28:54.445 03:16:49 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:28:54.445 03:16:49 -- host/digest.sh@77 -- # local rw bs qd 00:28:54.445 03:16:49 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:54.445 03:16:49 -- host/digest.sh@80 -- # rw=randread 00:28:54.445 03:16:49 -- host/digest.sh@80 -- # bs=131072 00:28:54.445 03:16:49 -- host/digest.sh@80 -- # qd=16 00:28:54.445 03:16:49 -- host/digest.sh@82 -- # bperfpid=2121921 00:28:54.445 03:16:49 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:54.445 03:16:49 -- host/digest.sh@83 -- # waitforlisten 2121921 /var/tmp/bperf.sock 00:28:54.445 03:16:49 -- common/autotest_common.sh@819 -- # '[' -z 2121921 ']' 00:28:54.445 03:16:49 -- 
common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:54.445 03:16:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:54.445 03:16:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:54.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:54.445 03:16:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:54.445 03:16:49 -- common/autotest_common.sh@10 -- # set +x 00:28:54.445 [2024-07-14 03:16:49.613684] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:54.445 [2024-07-14 03:16:49.613761] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121921 ] 00:28:54.445 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:54.445 Zero copy mechanism will not be used. 
00:28:54.445 EAL: No free 2048 kB hugepages reported on node 1 00:28:54.445 [2024-07-14 03:16:49.671591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.703 [2024-07-14 03:16:49.755872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:54.703 03:16:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:54.703 03:16:49 -- common/autotest_common.sh@852 -- # return 0 00:28:54.703 03:16:49 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:28:54.704 03:16:49 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:28:54.704 03:16:49 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:54.961 03:16:50 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:54.962 03:16:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:55.532 nvme0n1 00:28:55.532 03:16:50 -- host/digest.sh@91 -- # bperf_py perform_tests 00:28:55.532 03:16:50 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:55.532 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:55.532 Zero copy mechanism will not be used. 00:28:55.532 Running I/O for 2 seconds... 
00:28:58.065 00:28:58.065 Latency(us) 00:28:58.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.065 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:58.065 nvme0n1 : 2.00 2524.49 315.56 0.00 0.00 6333.51 5776.88 11553.75 00:28:58.065 =================================================================================================================== 00:28:58.065 Total : 2524.49 315.56 0.00 0.00 6333.51 5776.88 11553.75 00:28:58.065 0 00:28:58.065 03:16:52 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:28:58.065 03:16:52 -- host/digest.sh@92 -- # get_accel_stats 00:28:58.065 03:16:52 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:58.065 03:16:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:58.065 03:16:52 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:58.065 | select(.opcode=="crc32c") 00:28:58.065 | "\(.module_name) \(.executed)"' 00:28:58.065 03:16:52 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:28:58.065 03:16:52 -- host/digest.sh@93 -- # exp_module=software 00:28:58.065 03:16:52 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:28:58.065 03:16:52 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:58.065 03:16:52 -- host/digest.sh@97 -- # killprocess 2121921 00:28:58.065 03:16:52 -- common/autotest_common.sh@926 -- # '[' -z 2121921 ']' 00:28:58.065 03:16:52 -- common/autotest_common.sh@930 -- # kill -0 2121921 00:28:58.065 03:16:52 -- common/autotest_common.sh@931 -- # uname 00:28:58.065 03:16:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:58.065 03:16:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2121921 00:28:58.065 03:16:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:58.065 03:16:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:58.065 03:16:52 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 2121921' 00:28:58.065 killing process with pid 2121921 00:28:58.065 03:16:52 -- common/autotest_common.sh@945 -- # kill 2121921 00:28:58.065 Received shutdown signal, test time was about 2.000000 seconds 00:28:58.065 00:28:58.065 Latency(us) 00:28:58.065 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:58.065 =================================================================================================================== 00:28:58.065 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:58.065 03:16:52 -- common/autotest_common.sh@950 -- # wait 2121921 00:28:58.065 03:16:53 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:28:58.065 03:16:53 -- host/digest.sh@77 -- # local rw bs qd 00:28:58.066 03:16:53 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:58.066 03:16:53 -- host/digest.sh@80 -- # rw=randwrite 00:28:58.066 03:16:53 -- host/digest.sh@80 -- # bs=4096 00:28:58.066 03:16:53 -- host/digest.sh@80 -- # qd=128 00:28:58.066 03:16:53 -- host/digest.sh@82 -- # bperfpid=2122391 00:28:58.066 03:16:53 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:58.066 03:16:53 -- host/digest.sh@83 -- # waitforlisten 2122391 /var/tmp/bperf.sock 00:28:58.066 03:16:53 -- common/autotest_common.sh@819 -- # '[' -z 2122391 ']' 00:28:58.066 03:16:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:58.066 03:16:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:58.066 03:16:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:58.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:58.066 03:16:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:58.066 03:16:53 -- common/autotest_common.sh@10 -- # set +x 00:28:58.066 [2024-07-14 03:16:53.263594] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:28:58.066 [2024-07-14 03:16:53.263676] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122391 ] 00:28:58.066 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.324 [2024-07-14 03:16:53.323603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.324 [2024-07-14 03:16:53.408357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.324 03:16:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:58.324 03:16:53 -- common/autotest_common.sh@852 -- # return 0 00:28:58.324 03:16:53 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:28:58.324 03:16:53 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:28:58.324 03:16:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:58.613 03:16:53 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:58.613 03:16:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:59.183 nvme0n1 00:28:59.183 03:16:54 -- host/digest.sh@91 -- # bperf_py perform_tests 00:28:59.183 03:16:54 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:59.183 Running I/O for 2 seconds... 
00:29:01.720 00:29:01.720 Latency(us) 00:29:01.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.720 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:01.720 nvme0n1 : 2.00 20940.05 81.80 0.00 0.00 6104.17 2779.21 10340.12 00:29:01.720 =================================================================================================================== 00:29:01.720 Total : 20940.05 81.80 0.00 0.00 6104.17 2779.21 10340.12 00:29:01.720 0 00:29:01.720 03:16:56 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:01.720 03:16:56 -- host/digest.sh@92 -- # get_accel_stats 00:29:01.720 03:16:56 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:01.720 03:16:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:01.720 03:16:56 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:01.720 | select(.opcode=="crc32c") 00:29:01.720 | "\(.module_name) \(.executed)"' 00:29:01.720 03:16:56 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:01.720 03:16:56 -- host/digest.sh@93 -- # exp_module=software 00:29:01.720 03:16:56 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:01.720 03:16:56 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:01.720 03:16:56 -- host/digest.sh@97 -- # killprocess 2122391 00:29:01.720 03:16:56 -- common/autotest_common.sh@926 -- # '[' -z 2122391 ']' 00:29:01.720 03:16:56 -- common/autotest_common.sh@930 -- # kill -0 2122391 00:29:01.720 03:16:56 -- common/autotest_common.sh@931 -- # uname 00:29:01.720 03:16:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:01.720 03:16:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2122391 00:29:01.720 03:16:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:01.720 03:16:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:01.720 03:16:56 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 2122391' 00:29:01.720 killing process with pid 2122391 00:29:01.720 03:16:56 -- common/autotest_common.sh@945 -- # kill 2122391 00:29:01.720 Received shutdown signal, test time was about 2.000000 seconds 00:29:01.720 00:29:01.720 Latency(us) 00:29:01.720 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.720 =================================================================================================================== 00:29:01.720 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:01.720 03:16:56 -- common/autotest_common.sh@950 -- # wait 2122391 00:29:01.720 03:16:56 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:29:01.720 03:16:56 -- host/digest.sh@77 -- # local rw bs qd 00:29:01.720 03:16:56 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:01.720 03:16:56 -- host/digest.sh@80 -- # rw=randwrite 00:29:01.720 03:16:56 -- host/digest.sh@80 -- # bs=131072 00:29:01.720 03:16:56 -- host/digest.sh@80 -- # qd=16 00:29:01.720 03:16:56 -- host/digest.sh@82 -- # bperfpid=2122880 00:29:01.720 03:16:56 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:01.720 03:16:56 -- host/digest.sh@83 -- # waitforlisten 2122880 /var/tmp/bperf.sock 00:29:01.720 03:16:56 -- common/autotest_common.sh@819 -- # '[' -z 2122880 ']' 00:29:01.720 03:16:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:01.720 03:16:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:01.720 03:16:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:01.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:01.720 03:16:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:01.720 03:16:56 -- common/autotest_common.sh@10 -- # set +x 00:29:01.979 [2024-07-14 03:16:56.975182] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:01.979 [2024-07-14 03:16:56.975304] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2122880 ] 00:29:01.979 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:01.979 Zero copy mechanism will not be used. 00:29:01.979 EAL: No free 2048 kB hugepages reported on node 1 00:29:01.979 [2024-07-14 03:16:57.037591] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.979 [2024-07-14 03:16:57.127705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.979 03:16:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:01.979 03:16:57 -- common/autotest_common.sh@852 -- # return 0 00:29:01.979 03:16:57 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:01.979 03:16:57 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:01.979 03:16:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:02.549 03:16:57 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:02.549 03:16:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:02.808 nvme0n1 00:29:02.808 03:16:58 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:02.808 03:16:58 -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:03.068 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:03.068 Zero copy mechanism will not be used. 00:29:03.068 Running I/O for 2 seconds... 00:29:04.972 00:29:04.972 Latency(us) 00:29:04.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.972 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:04.972 nvme0n1 : 2.01 1605.89 200.74 0.00 0.00 9935.62 6310.87 19029.71 00:29:04.972 =================================================================================================================== 00:29:04.972 Total : 1605.89 200.74 0.00 0.00 9935.62 6310.87 19029.71 00:29:04.972 0 00:29:04.972 03:17:00 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:04.972 03:17:00 -- host/digest.sh@92 -- # get_accel_stats 00:29:04.972 03:17:00 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:04.972 03:17:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:04.972 03:17:00 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:04.972 | select(.opcode=="crc32c") 00:29:04.972 | "\(.module_name) \(.executed)"' 00:29:05.233 03:17:00 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:05.233 03:17:00 -- host/digest.sh@93 -- # exp_module=software 00:29:05.233 03:17:00 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:05.233 03:17:00 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:05.233 03:17:00 -- host/digest.sh@97 -- # killprocess 2122880 00:29:05.233 03:17:00 -- common/autotest_common.sh@926 -- # '[' -z 2122880 ']' 00:29:05.233 03:17:00 -- common/autotest_common.sh@930 -- # kill -0 2122880 00:29:05.233 03:17:00 -- common/autotest_common.sh@931 -- # uname 00:29:05.233 03:17:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:05.233 
03:17:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2122880 00:29:05.233 03:17:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:05.233 03:17:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:05.233 03:17:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2122880' 00:29:05.233 killing process with pid 2122880 00:29:05.233 03:17:00 -- common/autotest_common.sh@945 -- # kill 2122880 00:29:05.233 Received shutdown signal, test time was about 2.000000 seconds 00:29:05.233 00:29:05.233 Latency(us) 00:29:05.233 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.233 =================================================================================================================== 00:29:05.233 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:05.233 03:17:00 -- common/autotest_common.sh@950 -- # wait 2122880 00:29:05.494 03:17:00 -- host/digest.sh@126 -- # killprocess 2121476 00:29:05.494 03:17:00 -- common/autotest_common.sh@926 -- # '[' -z 2121476 ']' 00:29:05.494 03:17:00 -- common/autotest_common.sh@930 -- # kill -0 2121476 00:29:05.494 03:17:00 -- common/autotest_common.sh@931 -- # uname 00:29:05.494 03:17:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:05.494 03:17:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2121476 00:29:05.494 03:17:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:05.494 03:17:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:05.494 03:17:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2121476' 00:29:05.494 killing process with pid 2121476 00:29:05.494 03:17:00 -- common/autotest_common.sh@945 -- # kill 2121476 00:29:05.494 03:17:00 -- common/autotest_common.sh@950 -- # wait 2121476 00:29:05.753 00:29:05.754 real 0m15.246s 00:29:05.754 user 0m30.620s 00:29:05.754 sys 0m3.782s 00:29:05.754 03:17:00 -- common/autotest_common.sh@1105 
-- # xtrace_disable 00:29:05.754 03:17:00 -- common/autotest_common.sh@10 -- # set +x 00:29:05.754 ************************************ 00:29:05.754 END TEST nvmf_digest_clean 00:29:05.754 ************************************ 00:29:05.754 03:17:00 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:29:05.754 03:17:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:05.754 03:17:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:05.754 03:17:00 -- common/autotest_common.sh@10 -- # set +x 00:29:05.754 ************************************ 00:29:05.754 START TEST nvmf_digest_error 00:29:05.754 ************************************ 00:29:05.754 03:17:00 -- common/autotest_common.sh@1104 -- # run_digest_error 00:29:05.754 03:17:00 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:29:05.754 03:17:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:05.754 03:17:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:05.754 03:17:00 -- common/autotest_common.sh@10 -- # set +x 00:29:05.754 03:17:00 -- nvmf/common.sh@469 -- # nvmfpid=2123332 00:29:05.754 03:17:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:05.754 03:17:00 -- nvmf/common.sh@470 -- # waitforlisten 2123332 00:29:05.754 03:17:00 -- common/autotest_common.sh@819 -- # '[' -z 2123332 ']' 00:29:05.754 03:17:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:05.754 03:17:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:05.754 03:17:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:05.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:05.754 03:17:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:05.754 03:17:00 -- common/autotest_common.sh@10 -- # set +x 00:29:05.754 [2024-07-14 03:17:00.977791] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:05.754 [2024-07-14 03:17:00.977891] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.013 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.013 [2024-07-14 03:17:01.040690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.013 [2024-07-14 03:17:01.123985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:06.013 [2024-07-14 03:17:01.124141] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:06.013 [2024-07-14 03:17:01.124179] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:06.013 [2024-07-14 03:17:01.124192] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:06.013 [2024-07-14 03:17:01.124219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.013 03:17:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:06.013 03:17:01 -- common/autotest_common.sh@852 -- # return 0 00:29:06.013 03:17:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:06.013 03:17:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:06.013 03:17:01 -- common/autotest_common.sh@10 -- # set +x 00:29:06.013 03:17:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.013 03:17:01 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:06.013 03:17:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.013 03:17:01 -- common/autotest_common.sh@10 -- # set +x 00:29:06.013 [2024-07-14 03:17:01.204821] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:06.013 03:17:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.013 03:17:01 -- host/digest.sh@104 -- # common_target_config 00:29:06.013 03:17:01 -- host/digest.sh@43 -- # rpc_cmd 00:29:06.013 03:17:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:06.013 03:17:01 -- common/autotest_common.sh@10 -- # set +x 00:29:06.272 null0 00:29:06.272 [2024-07-14 03:17:01.316971] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.272 [2024-07-14 03:17:01.341175] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.272 03:17:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:06.272 03:17:01 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:29:06.272 03:17:01 -- host/digest.sh@54 -- # local rw bs qd 00:29:06.272 03:17:01 -- host/digest.sh@56 -- # rw=randread 00:29:06.272 03:17:01 -- host/digest.sh@56 -- # bs=4096 00:29:06.272 03:17:01 -- host/digest.sh@56 -- # qd=128 00:29:06.272 03:17:01 -- 
host/digest.sh@58 -- # bperfpid=2123476 00:29:06.272 03:17:01 -- host/digest.sh@60 -- # waitforlisten 2123476 /var/tmp/bperf.sock 00:29:06.272 03:17:01 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:06.272 03:17:01 -- common/autotest_common.sh@819 -- # '[' -z 2123476 ']' 00:29:06.272 03:17:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:06.272 03:17:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:06.272 03:17:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:06.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:06.272 03:17:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:06.272 03:17:01 -- common/autotest_common.sh@10 -- # set +x 00:29:06.272 [2024-07-14 03:17:01.384780] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:29:06.272 [2024-07-14 03:17:01.384839] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2123476 ] 00:29:06.272 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.272 [2024-07-14 03:17:01.446101] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.532 [2024-07-14 03:17:01.536367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.101 03:17:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:07.101 03:17:02 -- common/autotest_common.sh@852 -- # return 0 00:29:07.101 03:17:02 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:07.101 03:17:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:07.358 03:17:02 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:07.358 03:17:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:07.358 03:17:02 -- common/autotest_common.sh@10 -- # set +x 00:29:07.358 03:17:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:07.358 03:17:02 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:07.358 03:17:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:07.928 nvme0n1 00:29:07.928 03:17:02 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:07.928 03:17:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:07.928 03:17:02 -- common/autotest_common.sh@10 -- # 
set +x 00:29:07.928 03:17:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:07.928 03:17:02 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:07.928 03:17:02 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:07.928 Running I/O for 2 seconds... 00:29:07.928 [2024-07-14 03:17:03.019457] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:07.928 [2024-07-14 03:17:03.019515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.928 [2024-07-14 03:17:03.019536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.928 [2024-07-14 03:17:03.034459] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:07.928 [2024-07-14 03:17:03.034490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:21652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.928 [2024-07-14 03:17:03.034521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.928 [2024-07-14 03:17:03.049940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:07.928 [2024-07-14 03:17:03.049985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.928 [2024-07-14 03:17:03.050001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.928 [2024-07-14 03:17:03.065612] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:07.928 [2024-07-14 03:17:03.065643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.928 [2024-07-14 03:17:03.065659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.928 [2024-07-14 03:17:03.080559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:07.928 [2024-07-14 03:17:03.080591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.928 [2024-07-14 03:17:03.080609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.928 [2024-07-14 03:17:03.095066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:07.928 [2024-07-14 03:17:03.095098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.928 [2024-07-14 03:17:03.095115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.928 [2024-07-14 03:17:03.106160] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:07.928 [2024-07-14 03:17:03.106203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.928 [2024-07-14 03:17:03.106219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:07.928 [2024-07-14 03:17:03.121182] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:07.928 [2024-07-14 03:17:03.121210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.928 [2024-07-14 03:17:03.121241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.928 [2024-07-14 03:17:03.135720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:07.928 [2024-07-14 03:17:03.135750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.928 [2024-07-14 03:17:03.135766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.928 [2024-07-14 03:17:03.150293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:07.928 [2024-07-14 03:17:03.150324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.928 [2024-07-14 03:17:03.150341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.928 [2024-07-14 03:17:03.161497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:07.928 [2024-07-14 03:17:03.161525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:4895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.928 [2024-07-14 03:17:03.161555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.928 [2024-07-14 03:17:03.176846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:07.928 [2024-07-14 03:17:03.176883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.928 [2024-07-14 03:17:03.176902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.191221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.191250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 03:17:03.191281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.206688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.206717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 03:17:03.206750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.221543] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.221573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 
03:17:03.221590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.236110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.236140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 03:17:03.236157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.246674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.246702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 03:17:03.246733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.261628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.261657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 03:17:03.261693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.276374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.276406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11603 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 03:17:03.276423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.291555] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.291585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:20664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 03:17:03.291602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.302042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.302071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 03:17:03.302087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.317120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.317150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 03:17:03.317167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.331535] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.331564] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:4474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 03:17:03.331595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.346741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.346772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:12224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 03:17:03.346789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.362128] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.362159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 03:17:03.362175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.375580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.375618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 03:17:03.375635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.386702] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.386733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 03:17:03.386750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.401632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.401660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 03:17:03.401691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.416563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.416592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 03:17:03.416622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.431672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.431703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 03:17:03.431719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.198 [2024-07-14 03:17:03.445628] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.198 [2024-07-14 03:17:03.445658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.198 [2024-07-14 03:17:03.445675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.460 [2024-07-14 03:17:03.457345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.460 [2024-07-14 03:17:03.457375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.460 [2024-07-14 03:17:03.457391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.460 [2024-07-14 03:17:03.472410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.460 [2024-07-14 03:17:03.472448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.460 [2024-07-14 03:17:03.472478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.460 [2024-07-14 03:17:03.487529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.460 [2024-07-14 03:17:03.487574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.460 [2024-07-14 03:17:03.487590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:08.460 [2024-07-14 03:17:03.502077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.461 [2024-07-14 03:17:03.502108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.461 [2024-07-14 03:17:03.502131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.461 [2024-07-14 03:17:03.517152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.461 [2024-07-14 03:17:03.517181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.461 [2024-07-14 03:17:03.517198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.461 [2024-07-14 03:17:03.527462] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.461 [2024-07-14 03:17:03.527489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:9990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.461 [2024-07-14 03:17:03.527521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.461 [2024-07-14 03:17:03.542700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.461 [2024-07-14 03:17:03.542728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.461 [2024-07-14 03:17:03.542744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.461 [2024-07-14 03:17:03.556672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.461 [2024-07-14 03:17:03.556701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.461 [2024-07-14 03:17:03.556733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.461 [2024-07-14 03:17:03.572271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.461 [2024-07-14 03:17:03.572301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:20029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.461 [2024-07-14 03:17:03.572317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.461 [2024-07-14 03:17:03.583311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.461 [2024-07-14 03:17:03.583338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.461 [2024-07-14 03:17:03.583368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.461 [2024-07-14 03:17:03.598416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.461 [2024-07-14 03:17:03.598443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.461 [2024-07-14 
03:17:03.598474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.461 [2024-07-14 03:17:03.613607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.461 [2024-07-14 03:17:03.613635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.461 [2024-07-14 03:17:03.613666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.461 [2024-07-14 03:17:03.627272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.461 [2024-07-14 03:17:03.627313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:9470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.461 [2024-07-14 03:17:03.627332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.461 [2024-07-14 03:17:03.637975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.461 [2024-07-14 03:17:03.638005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.461 [2024-07-14 03:17:03.638021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.461 [2024-07-14 03:17:03.654897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.461 [2024-07-14 03:17:03.654946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20524 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.461 [2024-07-14 03:17:03.654962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.461 [2024-07-14 03:17:03.670823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.461 [2024-07-14 03:17:03.670857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.461 [2024-07-14 03:17:03.670885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.461 [2024-07-14 03:17:03.681894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.461 [2024-07-14 03:17:03.681943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.461 [2024-07-14 03:17:03.681961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.461 [2024-07-14 03:17:03.698216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.461 [2024-07-14 03:17:03.698250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.461 [2024-07-14 03:17:03.698268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.721 [2024-07-14 03:17:03.715808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.721 [2024-07-14 03:17:03.715843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:12374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.721 [2024-07-14 03:17:03.715862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.721 [2024-07-14 03:17:03.731668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.721 [2024-07-14 03:17:03.731701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.721 [2024-07-14 03:17:03.731719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.721 [2024-07-14 03:17:03.749175] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.721 [2024-07-14 03:17:03.749204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.721 [2024-07-14 03:17:03.749237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.721 [2024-07-14 03:17:03.763542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.721 [2024-07-14 03:17:03.763575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.721 [2024-07-14 03:17:03.763593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.721 [2024-07-14 03:17:03.775830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2317e30) 00:29:08.721 [2024-07-14 03:17:03.775863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.721 [2024-07-14 03:17:03.775890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.721 [2024-07-14 03:17:03.791596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.721 [2024-07-14 03:17:03.791629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.721 [2024-07-14 03:17:03.791648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.721 [2024-07-14 03:17:03.807612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.721 [2024-07-14 03:17:03.807647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.721 [2024-07-14 03:17:03.807665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.721 [2024-07-14 03:17:03.823840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.721 [2024-07-14 03:17:03.823881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.721 [2024-07-14 03:17:03.823902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.721 [2024-07-14 03:17:03.838753] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.721 [2024-07-14 03:17:03.838786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.721 [2024-07-14 03:17:03.838804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.721 [2024-07-14 03:17:03.851120] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.721 [2024-07-14 03:17:03.851150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:22293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.721 [2024-07-14 03:17:03.851166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.721 [2024-07-14 03:17:03.867163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.721 [2024-07-14 03:17:03.867211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.721 [2024-07-14 03:17:03.867229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.721 [2024-07-14 03:17:03.883791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.721 [2024-07-14 03:17:03.883824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:1386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.721 [2024-07-14 03:17:03.883849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:08.721 [2024-07-14 03:17:03.899967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.721 [2024-07-14 03:17:03.900004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.721 [2024-07-14 03:17:03.900020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.721 [2024-07-14 03:17:03.911553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.721 [2024-07-14 03:17:03.911585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18109 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.721 [2024-07-14 03:17:03.911604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.721 [2024-07-14 03:17:03.929297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.721 [2024-07-14 03:17:03.929330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.721 [2024-07-14 03:17:03.929348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.721 [2024-07-14 03:17:03.946735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.721 [2024-07-14 03:17:03.946769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:1267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.721 [2024-07-14 03:17:03.946787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.721 [2024-07-14 03:17:03.962936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.721 [2024-07-14 03:17:03.962966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.721 [2024-07-14 03:17:03.962982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:03.977660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:03.977693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:03.977712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:03.989479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:03.989513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:03.989531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.005103] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:04.005130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.005145] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.021021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:04.021051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.021066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.033809] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:04.033842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.033860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.045880] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:04.045912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.045945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.058101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:04.058131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22762 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.058147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.071289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:04.071322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.071340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.083271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:04.083303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.083321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.095278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:04.095311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.095329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.108560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:04.108593] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.108611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.120734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:04.120767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.120791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.132958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:04.132986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.133002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.145635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:04.145667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.145685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.157846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 
03:17:04.157889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.157923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.169714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:04.169744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.169761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.181747] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:04.181779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.181797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.194032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:04.194060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.194075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.205994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:04.206022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.206038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.219119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:04.219148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.219164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.982 [2024-07-14 03:17:04.231295] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:08.982 [2024-07-14 03:17:04.231334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.982 [2024-07-14 03:17:04.231354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.241 [2024-07-14 03:17:04.243615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.241 [2024-07-14 03:17:04.243648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-07-14 03:17:04.243666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.241 [2024-07-14 03:17:04.256923] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.241 [2024-07-14 03:17:04.256952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-07-14 03:17:04.256969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.241 [2024-07-14 03:17:04.269099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.241 [2024-07-14 03:17:04.269127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-07-14 03:17:04.269143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.241 [2024-07-14 03:17:04.281044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.241 [2024-07-14 03:17:04.281073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:2930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-07-14 03:17:04.281089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.241 [2024-07-14 03:17:04.293835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.241 [2024-07-14 03:17:04.293884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-07-14 03:17:04.293906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:09.241 [2024-07-14 03:17:04.305811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.241 [2024-07-14 03:17:04.305844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.241 [2024-07-14 03:17:04.305862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.242 [2024-07-14 03:17:04.318202] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.242 [2024-07-14 03:17:04.318235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:3783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-07-14 03:17:04.318253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.242 [2024-07-14 03:17:04.331343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.242 [2024-07-14 03:17:04.331376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-07-14 03:17:04.331393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.242 [2024-07-14 03:17:04.343266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.242 [2024-07-14 03:17:04.343298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-07-14 03:17:04.343317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.242 [2024-07-14 03:17:04.356438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.242 [2024-07-14 03:17:04.356472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-07-14 03:17:04.356490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.242 [2024-07-14 03:17:04.368406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.242 [2024-07-14 03:17:04.368439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-07-14 03:17:04.368457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.242 [2024-07-14 03:17:04.380406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.242 [2024-07-14 03:17:04.380438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-07-14 03:17:04.380457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.242 [2024-07-14 03:17:04.393749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.242 [2024-07-14 03:17:04.393781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-07-14 
03:17:04.393799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.242 [2024-07-14 03:17:04.405648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.242 [2024-07-14 03:17:04.405681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-07-14 03:17:04.405699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.242 [2024-07-14 03:17:04.417779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.242 [2024-07-14 03:17:04.417811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-07-14 03:17:04.417829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.242 [2024-07-14 03:17:04.430549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.242 [2024-07-14 03:17:04.430581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:4669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-07-14 03:17:04.430599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.242 [2024-07-14 03:17:04.442647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.242 [2024-07-14 03:17:04.442679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25101 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-07-14 03:17:04.442703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.242 [2024-07-14 03:17:04.455027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.242 [2024-07-14 03:17:04.455055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-07-14 03:17:04.455071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.242 [2024-07-14 03:17:04.467739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.242 [2024-07-14 03:17:04.467771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-07-14 03:17:04.467789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.242 [2024-07-14 03:17:04.480104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.242 [2024-07-14 03:17:04.480134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-07-14 03:17:04.480150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.242 [2024-07-14 03:17:04.492208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.242 [2024-07-14 03:17:04.492241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.242 [2024-07-14 03:17:04.492260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.505347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.501 [2024-07-14 03:17:04.505381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.501 [2024-07-14 03:17:04.505399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.517821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.501 [2024-07-14 03:17:04.517855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.501 [2024-07-14 03:17:04.517882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.529876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.501 [2024-07-14 03:17:04.529910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.501 [2024-07-14 03:17:04.529943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.542950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2317e30) 00:29:09.501 [2024-07-14 03:17:04.542994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.501 [2024-07-14 03:17:04.543010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.555146] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.501 [2024-07-14 03:17:04.555197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.501 [2024-07-14 03:17:04.555229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.567497] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.501 [2024-07-14 03:17:04.567540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.501 [2024-07-14 03:17:04.567558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.580360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.501 [2024-07-14 03:17:04.580394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.501 [2024-07-14 03:17:04.580412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.592437] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.501 [2024-07-14 03:17:04.592469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.501 [2024-07-14 03:17:04.592487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.604538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.501 [2024-07-14 03:17:04.604570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.501 [2024-07-14 03:17:04.604588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.616962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.501 [2024-07-14 03:17:04.616991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:15560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.501 [2024-07-14 03:17:04.617007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.629492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.501 [2024-07-14 03:17:04.629526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.501 [2024-07-14 03:17:04.629545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.641617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.501 [2024-07-14 03:17:04.641650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:24536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.501 [2024-07-14 03:17:04.641668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.654083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.501 [2024-07-14 03:17:04.654113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.501 [2024-07-14 03:17:04.654138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.666396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.501 [2024-07-14 03:17:04.666429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.501 [2024-07-14 03:17:04.666448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.678823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.501 [2024-07-14 03:17:04.678856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:7648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.501 [2024-07-14 03:17:04.678883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.691722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.501 [2024-07-14 03:17:04.691756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:22764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.501 [2024-07-14 03:17:04.691774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.704064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.501 [2024-07-14 03:17:04.704094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.501 [2024-07-14 03:17:04.704111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.716214] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.501 [2024-07-14 03:17:04.716246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.501 [2024-07-14 03:17:04.716264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.501 [2024-07-14 03:17:04.729275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.502 [2024-07-14 03:17:04.729307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.502 [2024-07-14 
03:17:04.729325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.502 [2024-07-14 03:17:04.741333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.502 [2024-07-14 03:17:04.741366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:18052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.502 [2024-07-14 03:17:04.741385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.502 [2024-07-14 03:17:04.753485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.502 [2024-07-14 03:17:04.753518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.502 [2024-07-14 03:17:04.753536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.760 [2024-07-14 03:17:04.766772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.760 [2024-07-14 03:17:04.766811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-07-14 03:17:04.766830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.760 [2024-07-14 03:17:04.779118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.760 [2024-07-14 03:17:04.779146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18289 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-07-14 03:17:04.779162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.760 [2024-07-14 03:17:04.791930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.760 [2024-07-14 03:17:04.791961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-07-14 03:17:04.791977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.760 [2024-07-14 03:17:04.803836] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.760 [2024-07-14 03:17:04.803877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-07-14 03:17:04.803912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.760 [2024-07-14 03:17:04.816213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.760 [2024-07-14 03:17:04.816246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:19141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-07-14 03:17:04.816265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.760 [2024-07-14 03:17:04.829036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.760 [2024-07-14 03:17:04.829066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:11185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-07-14 03:17:04.829097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.760 [2024-07-14 03:17:04.841011] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.760 [2024-07-14 03:17:04.841039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-07-14 03:17:04.841055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.760 [2024-07-14 03:17:04.853743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.760 [2024-07-14 03:17:04.853776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-07-14 03:17:04.853795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.760 [2024-07-14 03:17:04.865946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.760 [2024-07-14 03:17:04.865976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-07-14 03:17:04.865992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.760 [2024-07-14 03:17:04.878369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2317e30) 00:29:09.760 [2024-07-14 03:17:04.878402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-07-14 03:17:04.878420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.760 [2024-07-14 03:17:04.890537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.760 [2024-07-14 03:17:04.890568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-07-14 03:17:04.890587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.760 [2024-07-14 03:17:04.903493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.760 [2024-07-14 03:17:04.903526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-07-14 03:17:04.903545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.760 [2024-07-14 03:17:04.915890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.760 [2024-07-14 03:17:04.915934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-07-14 03:17:04.915950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.760 [2024-07-14 03:17:04.927775] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.760 [2024-07-14 03:17:04.927807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-07-14 03:17:04.927826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.760 [2024-07-14 03:17:04.940986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.760 [2024-07-14 03:17:04.941015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-07-14 03:17:04.941031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.760 [2024-07-14 03:17:04.952845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.760 [2024-07-14 03:17:04.952886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-07-14 03:17:04.952920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.760 [2024-07-14 03:17:04.966032] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30) 00:29:09.760 [2024-07-14 03:17:04.966062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.760 [2024-07-14 03:17:04.966079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0
00:29:09.760 [2024-07-14 03:17:04.978260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30)
00:29:09.760 [2024-07-14 03:17:04.978293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.760 [2024-07-14 03:17:04.978317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:09.760 [2024-07-14 03:17:04.990638] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30)
00:29:09.760 [2024-07-14 03:17:04.990670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.760 [2024-07-14 03:17:04.990688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:09.760 [2024-07-14 03:17:05.002612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2317e30)
00:29:09.760 [2024-07-14 03:17:05.002644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.760 [2024-07-14 03:17:05.002662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:09.760
00:29:09.760 Latency(us)
00:29:09.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:09.760 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:09.760 nvme0n1 : 2.00 19025.94 74.32 0.00 0.00 6719.54 2463.67 18252.99
00:29:09.760 ===================================================================================================================
00:29:09.760
Total : 19025.94 74.32 0.00 0.00 6719.54 2463.67 18252.99
00:29:09.760 0
00:29:10.018 03:17:05 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:10.018 03:17:05 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:10.018 03:17:05 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:10.018 03:17:05 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:10.018 | .driver_specific
00:29:10.018 | .nvme_error
00:29:10.018 | .status_code
00:29:10.018 | .command_transient_transport_error'
00:29:10.018 03:17:05 -- host/digest.sh@71 -- # (( 149 > 0 ))
00:29:10.018 03:17:05 -- host/digest.sh@73 -- # killprocess 2123476
00:29:10.018 03:17:05 -- common/autotest_common.sh@926 -- # '[' -z 2123476 ']'
00:29:10.018 03:17:05 -- common/autotest_common.sh@930 -- # kill -0 2123476
00:29:10.018 03:17:05 -- common/autotest_common.sh@931 -- # uname
00:29:10.018 03:17:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:10.018 03:17:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2123476
00:29:10.276 03:17:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:10.276 03:17:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:10.276 03:17:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2123476'
00:29:10.276 killing process with pid 2123476
00:29:10.276 03:17:05 -- common/autotest_common.sh@945 -- # kill 2123476
00:29:10.276 Received shutdown signal, test time was about 2.000000 seconds
00:29:10.276
00:29:10.276 Latency(us)
00:29:10.276 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:10.276 ===================================================================================================================
00:29:10.276 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:10.276 03:17:05 -- common/autotest_common.sh@950 -- # wait 2123476
00:29:10.276 03:17:05 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:29:10.276 03:17:05 -- host/digest.sh@54 -- # local rw bs qd
00:29:10.276 03:17:05 -- host/digest.sh@56 -- # rw=randread
00:29:10.276 03:17:05 -- host/digest.sh@56 -- # bs=131072
00:29:10.276 03:17:05 -- host/digest.sh@56 -- # qd=16
00:29:10.276 03:17:05 -- host/digest.sh@58 -- # bperfpid=2123907
00:29:10.276 03:17:05 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:10.276 03:17:05 -- host/digest.sh@60 -- # waitforlisten 2123907 /var/tmp/bperf.sock
00:29:10.276 03:17:05 -- common/autotest_common.sh@819 -- # '[' -z 2123907 ']'
00:29:10.276 03:17:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:10.276 03:17:05 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:10.276 03:17:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:10.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:10.277 03:17:05 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:10.277 03:17:05 -- common/autotest_common.sh@10 -- # set +x
00:29:10.277 [2024-07-14 03:17:05.529106] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:29:10.277 [2024-07-14 03:17:05.529182] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2123907 ]
00:29:10.277 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:10.277 Zero copy mechanism will not be used.
00:29:10.536 EAL: No free 2048 kB hugepages reported on node 1
00:29:10.536 [2024-07-14 03:17:05.591022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:10.536 [2024-07-14 03:17:05.678637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:11.472 03:17:06 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:11.472 03:17:06 -- common/autotest_common.sh@852 -- # return 0
00:29:11.472 03:17:06 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:11.472 03:17:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:11.730 03:17:06 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:11.730 03:17:06 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:11.730 03:17:06 -- common/autotest_common.sh@10 -- # set +x
00:29:11.730 03:17:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:11.730 03:17:06 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:11.730 03:17:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:11.987 nvme0n1
00:29:11.987 03:17:07 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:11.987 03:17:07 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:11.987 03:17:07 -- common/autotest_common.sh@10 -- # set +x
00:29:11.987 03:17:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:11.987 03:17:07 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:11.987 03:17:07 -- host/digest.sh@19 --
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:12.247 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:12.248 Zero copy mechanism will not be used. 00:29:12.248 Running I/O for 2 seconds... 00:29:12.248 [2024-07-14 03:17:07.269544] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.269620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.269641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.281293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.281339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.281357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.292915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.292956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.292974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.304504] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 
00:29:12.248 [2024-07-14 03:17:07.304533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.304565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.316499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.316532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.316550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.327904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.327944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.327961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.339586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.339615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.339631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.351875] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.351907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.351925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.363410] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.363438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.363455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.374965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.374994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.375011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.386480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.386510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.386527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.398627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.398661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.398680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.411340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.411373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.411392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.424086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.424129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.424146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.436677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.436709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.436728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.449477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.449508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.449528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.462159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.462201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.462217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.474951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.474979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.474996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.487663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.487695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 
03:17:07.487714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.248 [2024-07-14 03:17:07.500542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.248 [2024-07-14 03:17:07.500581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.248 [2024-07-14 03:17:07.500601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.509 [2024-07-14 03:17:07.513203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.509 [2024-07-14 03:17:07.513247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.509 [2024-07-14 03:17:07.513263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.509 [2024-07-14 03:17:07.525947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.509 [2024-07-14 03:17:07.525975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.509 [2024-07-14 03:17:07.525991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.509 [2024-07-14 03:17:07.538505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.509 [2024-07-14 03:17:07.538537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.509 [2024-07-14 03:17:07.538556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:12.509 [2024-07-14 03:17:07.551343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.509 [2024-07-14 03:17:07.551375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.509 [2024-07-14 03:17:07.551393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:12.509 [2024-07-14 03:17:07.564199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.509 [2024-07-14 03:17:07.564232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.509 [2024-07-14 03:17:07.564250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:12.509 [2024-07-14 03:17:07.577022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.509 [2024-07-14 03:17:07.577065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.509 [2024-07-14 03:17:07.577081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:12.509 [2024-07-14 03:17:07.590893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:12.509 [2024-07-14 03:17:07.590938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.509 [2024-07-14 03:17:07.590954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:12.509 [2024-07-14 03:17:07.603724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30)
00:29:12.509 [2024-07-14 03:17:07.603756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.509 [2024-07-14 03:17:07.603775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... repeated data digest error / READ / TRANSIENT TRANSPORT ERROR triplets on tqpair=(0x1b7bd30), qid:1 cid:15, timestamps 03:17:07.616 through 03:17:08.606, trimmed ...]
00:29:13.548 [2024-07-14 03:17:08.606764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30)
00:29:13.548 [2024-07-14 03:17:08.606796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:13.548 [2024-07-14 03:17:08.606815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:13.548 [2024-07-14 03:17:08.619460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30)
00:29:13.548 [2024-07-14 03:17:08.619491] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.548 [2024-07-14 03:17:08.619509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.548 [2024-07-14 03:17:08.632168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.548 [2024-07-14 03:17:08.632214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.548 [2024-07-14 03:17:08.632233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.548 [2024-07-14 03:17:08.644820] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.548 [2024-07-14 03:17:08.644852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.548 [2024-07-14 03:17:08.644879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.548 [2024-07-14 03:17:08.657411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.548 [2024-07-14 03:17:08.657443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.548 [2024-07-14 03:17:08.657461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.548 [2024-07-14 03:17:08.670094] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b7bd30) 00:29:13.548 [2024-07-14 03:17:08.670120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.548 [2024-07-14 03:17:08.670136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.548 [2024-07-14 03:17:08.682705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.548 [2024-07-14 03:17:08.682736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.548 [2024-07-14 03:17:08.682754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.548 [2024-07-14 03:17:08.695337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.548 [2024-07-14 03:17:08.695369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.548 [2024-07-14 03:17:08.695387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.548 [2024-07-14 03:17:08.707890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.548 [2024-07-14 03:17:08.707936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.548 [2024-07-14 03:17:08.707953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.548 [2024-07-14 03:17:08.720465] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.548 [2024-07-14 03:17:08.720497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.548 [2024-07-14 03:17:08.720515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.548 [2024-07-14 03:17:08.733187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.548 [2024-07-14 03:17:08.733220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.548 [2024-07-14 03:17:08.733238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.548 [2024-07-14 03:17:08.745705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.548 [2024-07-14 03:17:08.745737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.548 [2024-07-14 03:17:08.745755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.548 [2024-07-14 03:17:08.758404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.548 [2024-07-14 03:17:08.758436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.548 [2024-07-14 03:17:08.758454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:29:13.548 [2024-07-14 03:17:08.771150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.548 [2024-07-14 03:17:08.771178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.548 [2024-07-14 03:17:08.771195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.548 [2024-07-14 03:17:08.783794] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.548 [2024-07-14 03:17:08.783826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.548 [2024-07-14 03:17:08.783844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.548 [2024-07-14 03:17:08.796446] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.549 [2024-07-14 03:17:08.796478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.549 [2024-07-14 03:17:08.796496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.824 [2024-07-14 03:17:08.809241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:08.809274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:08.809298] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:08.821879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:08.821924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:08.821940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:08.834523] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:08.834555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:08.834574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:08.847123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:08.847151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:08.847167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:08.859883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:08.859928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 
03:17:08.859945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:08.872551] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:08.872583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:08.872601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:08.885171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:08.885213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:08.885229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:08.897829] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:08.897861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:08.897889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:08.910548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:08.910580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:08.910598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:08.923216] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:08.923248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:08.923267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:08.935873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:08.935918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:08.935934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:08.948438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:08.948470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:08.948488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:08.961042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:08.961070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:08.961086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:08.973692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:08.973724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:08.973742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:08.986322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:08.986354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:08.986372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:08.999070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:08.999098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:08.999114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:09.011680] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:09.011712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:09.011731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:09.024337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:09.024367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:09.024391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:09.037068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:09.037098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:09.037116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:09.049737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:09.049769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:09.049787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:13.825 [2024-07-14 03:17:09.062211] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:13.825 [2024-07-14 03:17:09.062244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:13.825 [2024-07-14 03:17:09.062262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.112 [2024-07-14 03:17:09.074381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:14.112 [2024-07-14 03:17:09.074415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.112 [2024-07-14 03:17:09.074434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.112 [2024-07-14 03:17:09.086682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:14.112 [2024-07-14 03:17:09.086712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.112 [2024-07-14 03:17:09.086729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.112 [2024-07-14 03:17:09.098203] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:14.112 [2024-07-14 03:17:09.098259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.112 [2024-07-14 03:17:09.098277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:29:14.112 [2024-07-14 03:17:09.109783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:14.112 [2024-07-14 03:17:09.109827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.112 [2024-07-14 03:17:09.109843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.112 [2024-07-14 03:17:09.121725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:14.112 [2024-07-14 03:17:09.121758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.112 [2024-07-14 03:17:09.121776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.112 [2024-07-14 03:17:09.134428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:14.112 [2024-07-14 03:17:09.134466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.112 [2024-07-14 03:17:09.134485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.112 [2024-07-14 03:17:09.147220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:14.112 [2024-07-14 03:17:09.147252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.112 [2024-07-14 03:17:09.147272] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.112 [2024-07-14 03:17:09.159913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:14.112 [2024-07-14 03:17:09.159944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.112 [2024-07-14 03:17:09.159960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.112 [2024-07-14 03:17:09.172585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:14.112 [2024-07-14 03:17:09.172617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.112 [2024-07-14 03:17:09.172635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.112 [2024-07-14 03:17:09.185167] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:14.112 [2024-07-14 03:17:09.185212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.112 [2024-07-14 03:17:09.185231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.112 [2024-07-14 03:17:09.197850] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:14.112 [2024-07-14 03:17:09.197891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.112 [2024-07-14 
03:17:09.197924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.112 [2024-07-14 03:17:09.210455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:14.113 [2024-07-14 03:17:09.210488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.113 [2024-07-14 03:17:09.210506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.113 [2024-07-14 03:17:09.223162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:14.113 [2024-07-14 03:17:09.223204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.113 [2024-07-14 03:17:09.223219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:14.113 [2024-07-14 03:17:09.235841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:14.113 [2024-07-14 03:17:09.235881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.113 [2024-07-14 03:17:09.235915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:14.113 [2024-07-14 03:17:09.248448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:14.113 [2024-07-14 03:17:09.248480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.113 [2024-07-14 03:17:09.248498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:14.113 [2024-07-14 03:17:09.260974] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b7bd30) 00:29:14.113 [2024-07-14 03:17:09.261002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:14.113 [2024-07-14 03:17:09.261019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:14.113 00:29:14.113 Latency(us) 00:29:14.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.113 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:14.113 nvme0n1 : 2.01 2457.71 307.21 0.00 0.00 6504.81 5631.24 13689.74 00:29:14.113 =================================================================================================================== 00:29:14.113 Total : 2457.71 307.21 0.00 0.00 6504.81 5631.24 13689.74 00:29:14.113 0 00:29:14.113 03:17:09 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:14.113 03:17:09 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:14.113 03:17:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:14.113 03:17:09 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:14.113 | .driver_specific 00:29:14.113 | .nvme_error 00:29:14.113 | .status_code 00:29:14.113 | .command_transient_transport_error' 00:29:14.373 03:17:09 -- host/digest.sh@71 -- # (( 159 > 0 )) 00:29:14.373 03:17:09 -- host/digest.sh@73 -- # killprocess 2123907 00:29:14.373 03:17:09 -- common/autotest_common.sh@926 -- # '[' -z 2123907 ']' 00:29:14.373 
03:17:09 -- common/autotest_common.sh@930 -- # kill -0 2123907 00:29:14.373 03:17:09 -- common/autotest_common.sh@931 -- # uname 00:29:14.373 03:17:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:14.373 03:17:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2123907 00:29:14.373 03:17:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:14.373 03:17:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:14.373 03:17:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2123907' 00:29:14.373 killing process with pid 2123907 00:29:14.373 03:17:09 -- common/autotest_common.sh@945 -- # kill 2123907 00:29:14.373 Received shutdown signal, test time was about 2.000000 seconds 00:29:14.373 00:29:14.373 Latency(us) 00:29:14.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.373 =================================================================================================================== 00:29:14.373 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:14.373 03:17:09 -- common/autotest_common.sh@950 -- # wait 2123907 00:29:14.632 03:17:09 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:29:14.632 03:17:09 -- host/digest.sh@54 -- # local rw bs qd 00:29:14.632 03:17:09 -- host/digest.sh@56 -- # rw=randwrite 00:29:14.632 03:17:09 -- host/digest.sh@56 -- # bs=4096 00:29:14.632 03:17:09 -- host/digest.sh@56 -- # qd=128 00:29:14.632 03:17:09 -- host/digest.sh@58 -- # bperfpid=2124458 00:29:14.632 03:17:09 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:14.632 03:17:09 -- host/digest.sh@60 -- # waitforlisten 2124458 /var/tmp/bperf.sock 00:29:14.632 03:17:09 -- common/autotest_common.sh@819 -- # '[' -z 2124458 ']' 00:29:14.632 03:17:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:14.632 
03:17:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:14.632 03:17:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:14.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:14.632 03:17:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:14.632 03:17:09 -- common/autotest_common.sh@10 -- # set +x 00:29:14.632 [2024-07-14 03:17:09.813523] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:14.632 [2024-07-14 03:17:09.813602] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124458 ] 00:29:14.632 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.632 [2024-07-14 03:17:09.874584] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.891 [2024-07-14 03:17:09.960119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.825 03:17:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:15.825 03:17:10 -- common/autotest_common.sh@852 -- # return 0 00:29:15.825 03:17:10 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:15.826 03:17:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:15.826 03:17:10 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:15.826 03:17:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:15.826 03:17:10 -- common/autotest_common.sh@10 -- # set +x 00:29:15.826 03:17:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:15.826 03:17:10 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller 
--ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.826 03:17:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:16.394 nvme0n1 00:29:16.394 03:17:11 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:16.394 03:17:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:16.394 03:17:11 -- common/autotest_common.sh@10 -- # set +x 00:29:16.394 03:17:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:16.394 03:17:11 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:16.394 03:17:11 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:16.394 Running I/O for 2 seconds... 00:29:16.394 [2024-07-14 03:17:11.566252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190e6738 00:29:16.394 [2024-07-14 03:17:11.567560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.394 [2024-07-14 03:17:11.567616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:16.394 [2024-07-14 03:17:11.578671] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190e6738 00:29:16.394 [2024-07-14 03:17:11.580057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.394 [2024-07-14 03:17:11.580087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:16.394 
[2024-07-14 03:17:11.591029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190e6738 00:29:16.394 [2024-07-14 03:17:11.592350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.394 [2024-07-14 03:17:11.592385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:16.394 [2024-07-14 03:17:11.603149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190e6738 00:29:16.394 [2024-07-14 03:17:11.604531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.394 [2024-07-14 03:17:11.604565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:16.394 [2024-07-14 03:17:11.615510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190e6738 00:29:16.394 [2024-07-14 03:17:11.616836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.394 [2024-07-14 03:17:11.616877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:16.394 [2024-07-14 03:17:11.627389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190e6738 00:29:16.394 [2024-07-14 03:17:11.628742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.394 [2024-07-14 03:17:11.628775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:16.394 [2024-07-14 03:17:11.639607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190e6738 00:29:16.394 [2024-07-14 03:17:11.641171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.394 [2024-07-14 03:17:11.641204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.652249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190e6738 00:29:16.656 [2024-07-14 03:17:11.653625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.653658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.664333] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f0350 00:29:16.656 [2024-07-14 03:17:11.665736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.665768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.676400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ea680 00:29:16.656 [2024-07-14 03:17:11.677799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.677831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.688605] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f0350 00:29:16.656 [2024-07-14 03:17:11.690052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.690080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.700875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190e6738 00:29:16.656 [2024-07-14 03:17:11.702300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.702338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.713010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190e49b0 00:29:16.656 [2024-07-14 03:17:11.714439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.714470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.725149] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190eee38 00:29:16.656 [2024-07-14 03:17:11.726616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.726648] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.737221] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190eb328 00:29:16.656 [2024-07-14 03:17:11.738631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:12725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.738663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.749328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190e4578 00:29:16.656 [2024-07-14 03:17:11.750783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:22790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.750815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.761247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190e9e10 00:29:16.656 [2024-07-14 03:17:11.762422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.762450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.773297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ea248 00:29:16.656 [2024-07-14 03:17:11.774463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17996 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:16.656 [2024-07-14 03:17:11.774506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.786000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f4b08 00:29:16.656 [2024-07-14 03:17:11.786563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.786594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.798378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.656 [2024-07-14 03:17:11.799662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.799691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.810668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.656 [2024-07-14 03:17:11.811989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.812018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.823080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.656 [2024-07-14 03:17:11.824381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 
nsid:1 lba:21158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.824410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.835329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.656 [2024-07-14 03:17:11.836645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.836674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.847990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.656 [2024-07-14 03:17:11.849325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.849354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.860398] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.656 [2024-07-14 03:17:11.861726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.861755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.872967] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.656 [2024-07-14 03:17:11.874345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.874374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.885400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.656 [2024-07-14 03:17:11.886746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.886774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:16.656 [2024-07-14 03:17:11.897987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.656 [2024-07-14 03:17:11.899381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.656 [2024-07-14 03:17:11.899409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:11.910738] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.916 [2024-07-14 03:17:11.912154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:11.912182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:11.923207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.916 
[2024-07-14 03:17:11.924609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:11.924637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:11.935545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.916 [2024-07-14 03:17:11.937031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:2150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:11.937059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:11.948096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.916 [2024-07-14 03:17:11.949479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:11.949525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:11.960520] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.916 [2024-07-14 03:17:11.961933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:4304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:11.961960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:11.972858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2493780) with pdu=0x2000190f0788 00:29:16.916 [2024-07-14 03:17:11.974327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:11.974359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:11.985262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190e99d8 00:29:16.916 [2024-07-14 03:17:11.986722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:23527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:11.986748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:11.996283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ecc78 00:29:16.916 [2024-07-14 03:17:11.997313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:11.997356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:12.008732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.916 [2024-07-14 03:17:12.009774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:12.009820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:12.021281] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.916 [2024-07-14 03:17:12.022355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:12.022404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:12.033695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.916 [2024-07-14 03:17:12.034752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:12.034797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:12.046092] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.916 [2024-07-14 03:17:12.047243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:12.047274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:12.058390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.916 [2024-07-14 03:17:12.059513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:12.059544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0056 p:0 m:0 
dnr:0 00:29:16.916 [2024-07-14 03:17:12.070742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.916 [2024-07-14 03:17:12.071841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:12.071880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:12.083013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.916 [2024-07-14 03:17:12.084137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:12.084181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:12.095479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.916 [2024-07-14 03:17:12.096588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:12.096620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:12.107775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.916 [2024-07-14 03:17:12.108944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:22211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:12.108972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:12.120196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.916 [2024-07-14 03:17:12.121396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:12.121427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:12.132611] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.916 [2024-07-14 03:17:12.133808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:12.133839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:12.145031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.916 [2024-07-14 03:17:12.146263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:12.146294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:16.916 [2024-07-14 03:17:12.157213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:16.916 [2024-07-14 03:17:12.158414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:21680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:16.916 [2024-07-14 03:17:12.158445] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:17.175 [2024-07-14 03:17:12.169721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:17.175 [2024-07-14 03:17:12.170911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.175 [2024-07-14 03:17:12.170953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:17.175 [2024-07-14 03:17:12.182100] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:17.175 [2024-07-14 03:17:12.183332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.175 [2024-07-14 03:17:12.183363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:17.175 [2024-07-14 03:17:12.194141] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:17.175 [2024-07-14 03:17:12.195385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:8037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.175 [2024-07-14 03:17:12.195416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:17.175 [2024-07-14 03:17:12.206202] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:17.175 [2024-07-14 03:17:12.207500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.175 [2024-07-14 03:17:12.207531] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.175 [2024-07-14 03:17:12.219602] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ed4e8 00:29:17.175 [2024-07-14 03:17:12.220862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.175 [2024-07-14 03:17:12.220902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.175 [2024-07-14 03:17:12.230458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ee190 00:29:17.175 [2024-07-14 03:17:12.231687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.175 [2024-07-14 03:17:12.231719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:17.175 [2024-07-14 03:17:12.242770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ee190 00:29:17.175 [2024-07-14 03:17:12.244039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.175 [2024-07-14 03:17:12.244066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:17.175 [2024-07-14 03:17:12.254937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f0350 00:29:17.175 [2024-07-14 03:17:12.256304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9816 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:17.175 [2024-07-14 03:17:12.256336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:17.175 [2024-07-14 03:17:12.268774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ea248 00:29:17.175 [2024-07-14 03:17:12.270160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.175 [2024-07-14 03:17:12.270188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:17.175 [2024-07-14 03:17:12.279371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ea680 00:29:17.175 [2024-07-14 03:17:12.280716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.175 [2024-07-14 03:17:12.280747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:17.175 [2024-07-14 03:17:12.291853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190ea680 00:29:17.175 [2024-07-14 03:17:12.293247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.175 [2024-07-14 03:17:12.293277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:17.175 [2024-07-14 03:17:12.305436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.175 [2024-07-14 03:17:12.305772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 
nsid:1 lba:23794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.175 [2024-07-14 03:17:12.305815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.175 [2024-07-14 03:17:12.318555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.175 [2024-07-14 03:17:12.319067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.175 [2024-07-14 03:17:12.319095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.175 [2024-07-14 03:17:12.331907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.176 [2024-07-14 03:17:12.332243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.176 [2024-07-14 03:17:12.332288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.176 [2024-07-14 03:17:12.345269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.176 [2024-07-14 03:17:12.345613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.176 [2024-07-14 03:17:12.345662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.176 [2024-07-14 03:17:12.358559] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.176 [2024-07-14 03:17:12.358890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.176 [2024-07-14 03:17:12.358935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.176 [2024-07-14 03:17:12.372033] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.176 [2024-07-14 03:17:12.372392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.176 [2024-07-14 03:17:12.372440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.176 [2024-07-14 03:17:12.385238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.176 [2024-07-14 03:17:12.385549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.176 [2024-07-14 03:17:12.385577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.176 [2024-07-14 03:17:12.398543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.176 [2024-07-14 03:17:12.398954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.176 [2024-07-14 03:17:12.398982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.176 [2024-07-14 03:17:12.411803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.176 
[2024-07-14 03:17:12.412151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.176 [2024-07-14 03:17:12.412180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.176 [2024-07-14 03:17:12.424898] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.176 [2024-07-14 03:17:12.425253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:6961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.176 [2024-07-14 03:17:12.425280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.435 [2024-07-14 03:17:12.438811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.439180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.439208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.435 [2024-07-14 03:17:12.451924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.452241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.452268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.435 [2024-07-14 03:17:12.464543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.464856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.464893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.435 [2024-07-14 03:17:12.477115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.477415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.477443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.435 [2024-07-14 03:17:12.489957] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.490302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:7963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.490329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.435 [2024-07-14 03:17:12.503281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.503597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.503625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.435 [2024-07-14 03:17:12.516507] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.516838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:6343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.516891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.435 [2024-07-14 03:17:12.529586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.529993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.530022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.435 [2024-07-14 03:17:12.542938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.543281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.543309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.435 [2024-07-14 03:17:12.556054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.556371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:22670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.556399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:17.435 [2024-07-14 03:17:12.569237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.569599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:22788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.569643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.435 [2024-07-14 03:17:12.582670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.583005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:12504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.583033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.435 [2024-07-14 03:17:12.596076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.596402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:8438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.596430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.435 [2024-07-14 03:17:12.609529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.609841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.609890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.435 [2024-07-14 03:17:12.622678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.623046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.623073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.435 [2024-07-14 03:17:12.636037] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.636384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.636411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.435 [2024-07-14 03:17:12.649245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.649544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.649569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.435 [2024-07-14 03:17:12.662440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.662752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.662778] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.435 [2024-07-14 03:17:12.675759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.435 [2024-07-14 03:17:12.676080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.435 [2024-07-14 03:17:12.676109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.694 [2024-07-14 03:17:12.689366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.694 [2024-07-14 03:17:12.689685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.694 [2024-07-14 03:17:12.689712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.694 [2024-07-14 03:17:12.702794] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.694 [2024-07-14 03:17:12.703137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.694 [2024-07-14 03:17:12.703165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.694 [2024-07-14 03:17:12.715861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.694 [2024-07-14 03:17:12.716206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.694 [2024-07-14 03:17:12.716233] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.694 [2024-07-14 03:17:12.729156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.695 [2024-07-14 03:17:12.729470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.695 [2024-07-14 03:17:12.729497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.695 [2024-07-14 03:17:12.742173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.695 [2024-07-14 03:17:12.742566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.695 [2024-07-14 03:17:12.742594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.695 [2024-07-14 03:17:12.755397] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.695 [2024-07-14 03:17:12.755715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.695 [2024-07-14 03:17:12.755742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.695 [2024-07-14 03:17:12.768473] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.695 [2024-07-14 03:17:12.768821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:17.695 [2024-07-14 03:17:12.768848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.695 [2024-07-14 03:17:12.781592] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.695 [2024-07-14 03:17:12.781989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.695 [2024-07-14 03:17:12.782017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.695 [2024-07-14 03:17:12.794842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.695 [2024-07-14 03:17:12.795166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.695 [2024-07-14 03:17:12.795194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.695 [2024-07-14 03:17:12.808174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.695 [2024-07-14 03:17:12.808526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.695 [2024-07-14 03:17:12.808557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.695 [2024-07-14 03:17:12.821268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.695 [2024-07-14 03:17:12.821592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:7987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.695 [2024-07-14 03:17:12.821633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.695 [2024-07-14 03:17:12.834480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.695 [2024-07-14 03:17:12.834794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.695 [2024-07-14 03:17:12.834821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.695 [2024-07-14 03:17:12.847618] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.695 [2024-07-14 03:17:12.847997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.695 [2024-07-14 03:17:12.848025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.695 [2024-07-14 03:17:12.860992] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.695 [2024-07-14 03:17:12.861317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.695 [2024-07-14 03:17:12.861358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.695 [2024-07-14 03:17:12.874153] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.695 [2024-07-14 03:17:12.874490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.695 [2024-07-14 03:17:12.874517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.695 [2024-07-14 03:17:12.887315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.695 [2024-07-14 03:17:12.887630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.695 [2024-07-14 03:17:12.887658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.695 [2024-07-14 03:17:12.900597] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.695 [2024-07-14 03:17:12.900953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.695 [2024-07-14 03:17:12.900996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.695 [2024-07-14 03:17:12.913935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.695 [2024-07-14 03:17:12.914249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:16640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.695 [2024-07-14 03:17:12.914276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.695 [2024-07-14 03:17:12.927166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.695 
[2024-07-14 03:17:12.927505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.695 [2024-07-14 03:17:12.927532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.695 [2024-07-14 03:17:12.940405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.695 [2024-07-14 03:17:12.940745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:10462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.695 [2024-07-14 03:17:12.940772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:12.954083] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:12.954400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:12.954427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:12.967401] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:12.967749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:12.967776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:12.980484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:12.980809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:12.980852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:12.993895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:12.994256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:12.994284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:13.006953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:13.007287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:13.007314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:13.020316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:13.020682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:10397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:13.020709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:13.033623] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:13.033953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:13.033996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:13.046766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:13.047093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:13.047120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:13.060055] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:13.060464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:11261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:13.060491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:13.073139] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:13.073554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:13.073581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:17.956 [2024-07-14 03:17:13.086315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:13.086668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:13.086695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:13.099612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:13.099973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:13.100001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:13.112881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:13.113218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:13.113245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:13.126279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:13.126671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:13.126700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:13.139608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:13.139952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8718 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:13.139996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:13.152723] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:13.153060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:13.153092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:13.165908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:13.166226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:15829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:13.166252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:13.179081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:13.179395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:13.179422] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:13.192082] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:13.192414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:24851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:13.192442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.956 [2024-07-14 03:17:13.205656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:17.956 [2024-07-14 03:17:13.206010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.956 [2024-07-14 03:17:13.206038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.219115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 [2024-07-14 03:17:13.219457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:10676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.216 [2024-07-14 03:17:13.219498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.232263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 [2024-07-14 03:17:13.232622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.216 [2024-07-14 03:17:13.232648] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.245545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 [2024-07-14 03:17:13.245892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.216 [2024-07-14 03:17:13.245918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.258577] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 [2024-07-14 03:17:13.258933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.216 [2024-07-14 03:17:13.258961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.271780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 [2024-07-14 03:17:13.272138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.216 [2024-07-14 03:17:13.272182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.285114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 [2024-07-14 03:17:13.285457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:18.216 [2024-07-14 03:17:13.285484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.298432] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 [2024-07-14 03:17:13.298775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.216 [2024-07-14 03:17:13.298801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.311642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 [2024-07-14 03:17:13.312080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.216 [2024-07-14 03:17:13.312108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.324928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 [2024-07-14 03:17:13.325331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.216 [2024-07-14 03:17:13.325358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.338209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 [2024-07-14 03:17:13.338600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:21384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.216 [2024-07-14 03:17:13.338628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.351349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 [2024-07-14 03:17:13.351721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.216 [2024-07-14 03:17:13.351749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.364822] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 [2024-07-14 03:17:13.365145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:25096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.216 [2024-07-14 03:17:13.365187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.378207] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 [2024-07-14 03:17:13.378529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.216 [2024-07-14 03:17:13.378557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.391632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 [2024-07-14 03:17:13.391978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.216 [2024-07-14 03:17:13.392006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.405060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 [2024-07-14 03:17:13.405410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.216 [2024-07-14 03:17:13.405438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.418323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 [2024-07-14 03:17:13.418637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.216 [2024-07-14 03:17:13.418664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.431546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 [2024-07-14 03:17:13.431863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6286 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.216 [2024-07-14 03:17:13.431897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.444560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 
[2024-07-14 03:17:13.444883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.216 [2024-07-14 03:17:13.444910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.216 [2024-07-14 03:17:13.457609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.216 [2024-07-14 03:17:13.457954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:13095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.216 [2024-07-14 03:17:13.457997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.474 [2024-07-14 03:17:13.471058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.474 [2024-07-14 03:17:13.471475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.474 [2024-07-14 03:17:13.471503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.474 [2024-07-14 03:17:13.484467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.474 [2024-07-14 03:17:13.484782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.474 [2024-07-14 03:17:13.484809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.474 [2024-07-14 03:17:13.497721] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.474 [2024-07-14 03:17:13.498059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.475 [2024-07-14 03:17:13.498092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.475 [2024-07-14 03:17:13.510981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.475 [2024-07-14 03:17:13.511305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.475 [2024-07-14 03:17:13.511345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.475 [2024-07-14 03:17:13.524102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.475 [2024-07-14 03:17:13.524438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:4013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.475 [2024-07-14 03:17:13.524465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.475 [2024-07-14 03:17:13.537591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.475 [2024-07-14 03:17:13.537950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:20788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.475 [2024-07-14 03:17:13.537977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.475 [2024-07-14 03:17:13.550776] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493780) with pdu=0x2000190f6020 00:29:18.475 [2024-07-14 03:17:13.551099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:18.475 [2024-07-14 03:17:13.551141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:18.475 00:29:18.475 Latency(us) 00:29:18.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.475 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:18.475 nvme0n1 : 2.01 19750.50 77.15 0.00 0.00 6466.63 3203.98 17670.45 00:29:18.475 =================================================================================================================== 00:29:18.475 Total : 19750.50 77.15 0.00 0.00 6466.63 3203.98 17670.45 00:29:18.475 0 00:29:18.475 03:17:13 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:18.475 03:17:13 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:18.475 03:17:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:18.475 03:17:13 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:18.475 | .driver_specific 00:29:18.475 | .nvme_error 00:29:18.475 | .status_code 00:29:18.475 | .command_transient_transport_error' 00:29:18.734 03:17:13 -- host/digest.sh@71 -- # (( 155 > 0 )) 00:29:18.734 03:17:13 -- host/digest.sh@73 -- # killprocess 2124458 00:29:18.734 03:17:13 -- common/autotest_common.sh@926 -- # '[' -z 2124458 ']' 00:29:18.734 03:17:13 -- common/autotest_common.sh@930 -- # kill -0 2124458 00:29:18.734 03:17:13 -- common/autotest_common.sh@931 -- # uname 00:29:18.734 03:17:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:18.734 03:17:13 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 2124458 00:29:18.734 03:17:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:18.734 03:17:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:18.734 03:17:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2124458' 00:29:18.734 killing process with pid 2124458 00:29:18.734 03:17:13 -- common/autotest_common.sh@945 -- # kill 2124458 00:29:18.734 Received shutdown signal, test time was about 2.000000 seconds 00:29:18.734 00:29:18.734 Latency(us) 00:29:18.734 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.734 =================================================================================================================== 00:29:18.734 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:18.734 03:17:13 -- common/autotest_common.sh@950 -- # wait 2124458 00:29:18.991 03:17:14 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:29:18.991 03:17:14 -- host/digest.sh@54 -- # local rw bs qd 00:29:18.991 03:17:14 -- host/digest.sh@56 -- # rw=randwrite 00:29:18.991 03:17:14 -- host/digest.sh@56 -- # bs=131072 00:29:18.992 03:17:14 -- host/digest.sh@56 -- # qd=16 00:29:18.992 03:17:14 -- host/digest.sh@58 -- # bperfpid=2125009 00:29:18.992 03:17:14 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:18.992 03:17:14 -- host/digest.sh@60 -- # waitforlisten 2125009 /var/tmp/bperf.sock 00:29:18.992 03:17:14 -- common/autotest_common.sh@819 -- # '[' -z 2125009 ']' 00:29:18.992 03:17:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:18.992 03:17:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:18.992 03:17:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:29:18.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:18.992 03:17:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:18.992 03:17:14 -- common/autotest_common.sh@10 -- # set +x 00:29:18.992 [2024-07-14 03:17:14.126792] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:18.992 [2024-07-14 03:17:14.126897] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2125009 ] 00:29:18.992 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:18.992 Zero copy mechanism will not be used. 00:29:18.992 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.992 [2024-07-14 03:17:14.187519] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:19.250 [2024-07-14 03:17:14.272568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.185 03:17:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:20.185 03:17:15 -- common/autotest_common.sh@852 -- # return 0 00:29:20.185 03:17:15 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:20.185 03:17:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:20.185 03:17:15 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:20.185 03:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.185 03:17:15 -- common/autotest_common.sh@10 -- # set +x 00:29:20.185 03:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.185 03:17:15 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:20.185 
03:17:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:20.444 nvme0n1 00:29:20.703 03:17:15 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:20.703 03:17:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.703 03:17:15 -- common/autotest_common.sh@10 -- # set +x 00:29:20.703 03:17:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.703 03:17:15 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:20.703 03:17:15 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:20.703 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:20.703 Zero copy mechanism will not be used. 00:29:20.703 Running I/O for 2 seconds... 00:29:20.703 [2024-07-14 03:17:15.860876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:20.703 [2024-07-14 03:17:15.861453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.703 [2024-07-14 03:17:15.861505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.703 [2024-07-14 03:17:15.886319] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:20.703 [2024-07-14 03:17:15.887368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.703 [2024-07-14 03:17:15.887412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:29:20.703 [2024-07-14 03:17:15.915570] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:20.703 [2024-07-14 03:17:15.916347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.703 [2024-07-14 03:17:15.916386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.703 [2024-07-14 03:17:15.944118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:20.703 [2024-07-14 03:17:15.944697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.703 [2024-07-14 03:17:15.944733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.962 [2024-07-14 03:17:15.971308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:20.962 [2024-07-14 03:17:15.971970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.962 [2024-07-14 03:17:15.972018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.962 [2024-07-14 03:17:15.997266] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:20.962 [2024-07-14 03:17:15.997853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.962 [2024-07-14 03:17:15.997901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.962 [2024-07-14 03:17:16.021547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:20.962 [2024-07-14 03:17:16.022204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.962 [2024-07-14 03:17:16.022241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.962 [2024-07-14 03:17:16.051373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:20.962 [2024-07-14 03:17:16.052328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.962 [2024-07-14 03:17:16.052380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.962 [2024-07-14 03:17:16.077048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:20.962 [2024-07-14 03:17:16.077773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.962 [2024-07-14 03:17:16.077812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:20.962 [2024-07-14 03:17:16.105253] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:20.962 [2024-07-14 03:17:16.106223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.962 [2024-07-14 03:17:16.106262] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:20.962 [2024-07-14 03:17:16.135608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:20.962 [2024-07-14 03:17:16.136290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.962 [2024-07-14 03:17:16.136328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:20.962 [2024-07-14 03:17:16.165263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:20.962 [2024-07-14 03:17:16.166110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.962 [2024-07-14 03:17:16.166148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:20.962 [2024-07-14 03:17:16.190928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:20.962 [2024-07-14 03:17:16.191575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:20.962 [2024-07-14 03:17:16.191613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.221 [2024-07-14 03:17:16.216839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.221 [2024-07-14 03:17:16.217733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:21.221 [2024-07-14 03:17:16.217784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.221 [2024-07-14 03:17:16.246122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.221 [2024-07-14 03:17:16.246811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.221 [2024-07-14 03:17:16.246862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.221 [2024-07-14 03:17:16.274539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.221 [2024-07-14 03:17:16.275556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.221 [2024-07-14 03:17:16.275593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.221 [2024-07-14 03:17:16.303449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.221 [2024-07-14 03:17:16.304598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.221 [2024-07-14 03:17:16.304637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.221 [2024-07-14 03:17:16.332729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.221 [2024-07-14 03:17:16.333697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.221 [2024-07-14 03:17:16.333739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.221 [2024-07-14 03:17:16.361063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.221 [2024-07-14 03:17:16.361625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.221 [2024-07-14 03:17:16.361662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.221 [2024-07-14 03:17:16.386118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.221 [2024-07-14 03:17:16.386842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.221 [2024-07-14 03:17:16.386897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.221 [2024-07-14 03:17:16.412502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.221 [2024-07-14 03:17:16.413306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.221 [2024-07-14 03:17:16.413343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.221 [2024-07-14 03:17:16.440449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.221 [2024-07-14 03:17:16.441058] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.221 [2024-07-14 03:17:16.441096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.221 [2024-07-14 03:17:16.467902] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.221 [2024-07-14 03:17:16.468614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.221 [2024-07-14 03:17:16.468650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.479 [2024-07-14 03:17:16.497467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.480 [2024-07-14 03:17:16.498258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.480 [2024-07-14 03:17:16.498297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.480 [2024-07-14 03:17:16.524881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.480 [2024-07-14 03:17:16.525447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.480 [2024-07-14 03:17:16.525484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.480 [2024-07-14 03:17:16.553371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 
00:29:21.480 [2024-07-14 03:17:16.554091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.480 [2024-07-14 03:17:16.554129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.480 [2024-07-14 03:17:16.582031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.480 [2024-07-14 03:17:16.582683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.480 [2024-07-14 03:17:16.582719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.480 [2024-07-14 03:17:16.611351] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.480 [2024-07-14 03:17:16.612247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.480 [2024-07-14 03:17:16.612284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.480 [2024-07-14 03:17:16.642533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.480 [2024-07-14 03:17:16.643182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.480 [2024-07-14 03:17:16.643233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.480 [2024-07-14 03:17:16.671805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.480 [2024-07-14 03:17:16.672490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.480 [2024-07-14 03:17:16.672527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.480 [2024-07-14 03:17:16.700340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.480 [2024-07-14 03:17:16.700970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.480 [2024-07-14 03:17:16.701008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.480 [2024-07-14 03:17:16.725442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.480 [2024-07-14 03:17:16.726021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.480 [2024-07-14 03:17:16.726059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.740 [2024-07-14 03:17:16.748018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.740 [2024-07-14 03:17:16.748785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.740 [2024-07-14 03:17:16.748837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.740 [2024-07-14 
03:17:16.773847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.740 [2024-07-14 03:17:16.774532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.740 [2024-07-14 03:17:16.774568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.740 [2024-07-14 03:17:16.801645] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.740 [2024-07-14 03:17:16.802649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.740 [2024-07-14 03:17:16.802710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.740 [2024-07-14 03:17:16.826667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.740 [2024-07-14 03:17:16.827237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.740 [2024-07-14 03:17:16.827288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.740 [2024-07-14 03:17:16.852349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.740 [2024-07-14 03:17:16.853235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.740 [2024-07-14 03:17:16.853271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.740 [2024-07-14 03:17:16.879702] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.740 [2024-07-14 03:17:16.880379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.740 [2024-07-14 03:17:16.880416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:21.740 [2024-07-14 03:17:16.907077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.740 [2024-07-14 03:17:16.907732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.740 [2024-07-14 03:17:16.907783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:21.740 [2024-07-14 03:17:16.934386] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.740 [2024-07-14 03:17:16.935080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.740 [2024-07-14 03:17:16.935118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:21.740 [2024-07-14 03:17:16.962491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.740 [2024-07-14 03:17:16.963488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.740 [2024-07-14 03:17:16.963526] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:21.740 [2024-07-14 03:17:16.991160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:21.740 [2024-07-14 03:17:16.991651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:21.740 [2024-07-14 03:17:16.991700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.000 [2024-07-14 03:17:17.019709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.000 [2024-07-14 03:17:17.020489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.000 [2024-07-14 03:17:17.020526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.000 [2024-07-14 03:17:17.048014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.000 [2024-07-14 03:17:17.048801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.000 [2024-07-14 03:17:17.048838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.000 [2024-07-14 03:17:17.075360] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.000 [2024-07-14 03:17:17.075970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.000 [2024-07-14 
03:17:17.076008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.000 [2024-07-14 03:17:17.102926] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.000 [2024-07-14 03:17:17.103690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.000 [2024-07-14 03:17:17.103727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.000 [2024-07-14 03:17:17.132115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.000 [2024-07-14 03:17:17.132599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.000 [2024-07-14 03:17:17.132638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.000 [2024-07-14 03:17:17.157752] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.000 [2024-07-14 03:17:17.158560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.000 [2024-07-14 03:17:17.158597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.000 [2024-07-14 03:17:17.186138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.000 [2024-07-14 03:17:17.186724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.000 [2024-07-14 03:17:17.186773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.000 [2024-07-14 03:17:17.210256] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.000 [2024-07-14 03:17:17.211165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.000 [2024-07-14 03:17:17.211216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.000 [2024-07-14 03:17:17.235814] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.000 [2024-07-14 03:17:17.236520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.000 [2024-07-14 03:17:17.236556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.261 [2024-07-14 03:17:17.264849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.261 [2024-07-14 03:17:17.265730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.261 [2024-07-14 03:17:17.265781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.261 [2024-07-14 03:17:17.293441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.261 [2024-07-14 03:17:17.294213] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.261 [2024-07-14 03:17:17.294251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.261 [2024-07-14 03:17:17.321575] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.261 [2024-07-14 03:17:17.322255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.261 [2024-07-14 03:17:17.322305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.261 [2024-07-14 03:17:17.349519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.261 [2024-07-14 03:17:17.349903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.261 [2024-07-14 03:17:17.349940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.261 [2024-07-14 03:17:17.376879] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.261 [2024-07-14 03:17:17.377880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.261 [2024-07-14 03:17:17.377918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.261 [2024-07-14 03:17:17.404986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.261 [2024-07-14 
03:17:17.405801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.261 [2024-07-14 03:17:17.405852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.261 [2024-07-14 03:17:17.431758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.261 [2024-07-14 03:17:17.432632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.261 [2024-07-14 03:17:17.432671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.261 [2024-07-14 03:17:17.461086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.261 [2024-07-14 03:17:17.461747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.261 [2024-07-14 03:17:17.461784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.261 [2024-07-14 03:17:17.489060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.261 [2024-07-14 03:17:17.489949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.261 [2024-07-14 03:17:17.489988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.521 [2024-07-14 03:17:17.517982] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.521 [2024-07-14 03:17:17.518756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.521 [2024-07-14 03:17:17.518800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.521 [2024-07-14 03:17:17.546264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.521 [2024-07-14 03:17:17.547029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.521 [2024-07-14 03:17:17.547067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.521 [2024-07-14 03:17:17.574884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.521 [2024-07-14 03:17:17.575586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.521 [2024-07-14 03:17:17.575623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.521 [2024-07-14 03:17:17.603713] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.521 [2024-07-14 03:17:17.604617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.521 [2024-07-14 03:17:17.604654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.521 [2024-07-14 03:17:17.631707] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.521 [2024-07-14 03:17:17.632401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.521 [2024-07-14 03:17:17.632438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.521 [2024-07-14 03:17:17.659687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.521 [2024-07-14 03:17:17.660628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.521 [2024-07-14 03:17:17.660666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.521 [2024-07-14 03:17:17.687388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.521 [2024-07-14 03:17:17.688130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.521 [2024-07-14 03:17:17.688168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.521 [2024-07-14 03:17:17.714574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.522 [2024-07-14 03:17:17.715444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.522 [2024-07-14 03:17:17.715480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:22.522 [2024-07-14 03:17:17.742801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.522 [2024-07-14 03:17:17.743383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.522 [2024-07-14 03:17:17.743433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:22.522 [2024-07-14 03:17:17.768708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.522 [2024-07-14 03:17:17.769307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.522 [2024-07-14 03:17:17.769344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:22.781 [2024-07-14 03:17:17.796405] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.781 [2024-07-14 03:17:17.797304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.781 [2024-07-14 03:17:17.797340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:22.781 [2024-07-14 03:17:17.824651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2493a50) with pdu=0x2000190fef90 00:29:22.781 [2024-07-14 03:17:17.825345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.781 [2024-07-14 03:17:17.825396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:22.781
00:29:22.781 Latency(us)
00:29:22.781 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:22.781 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:22.781      nvme0n1                :       2.01    1119.68     139.96       0.00     0.00   14237.78   10582.85   33204.91
00:29:22.781 ===================================================================================================================
00:29:22.781 Total                       :               1119.68     139.96       0.00     0.00   14237.78   10582.85   33204.91
00:29:22.781 0
00:29:22.781 03:17:17 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:22.781 03:17:17 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:22.781 03:17:17 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:22.781 03:17:17 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:22.781 | .driver_specific
00:29:22.781 | .nvme_error
00:29:22.781 | .status_code
00:29:22.781 | .command_transient_transport_error'
00:29:23.039 03:17:18 -- host/digest.sh@71 -- # (( 72 > 0 ))
00:29:23.039 03:17:18 -- host/digest.sh@73 -- # killprocess 2125009
00:29:23.039 03:17:18 -- common/autotest_common.sh@926 -- # '[' -z 2125009 ']'
00:29:23.039 03:17:18 -- common/autotest_common.sh@930 -- # kill -0 2125009
00:29:23.039 03:17:18 -- common/autotest_common.sh@931 -- # uname
00:29:23.039 03:17:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:23.039 03:17:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2125009
00:29:23.039 03:17:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:23.039 03:17:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:23.039 03:17:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2125009'
killing process with pid 2125009
03:17:18 -- common/autotest_common.sh@945 -- # kill 2125009
00:29:23.039 Received shutdown signal, test time was about 2.000000 seconds
00:29:23.039
00:29:23.039 Latency(us)
00:29:23.039 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:23.039 ===================================================================================================================
00:29:23.039 Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:29:23.039 03:17:18 -- common/autotest_common.sh@950 -- # wait 2125009
00:29:23.297 03:17:18 -- host/digest.sh@115 -- # killprocess 2123332
00:29:23.297 03:17:18 -- common/autotest_common.sh@926 -- # '[' -z 2123332 ']'
00:29:23.297 03:17:18 -- common/autotest_common.sh@930 -- # kill -0 2123332
00:29:23.297 03:17:18 -- common/autotest_common.sh@931 -- # uname
00:29:23.297 03:17:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:23.297 03:17:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2123332
00:29:23.297 03:17:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:29:23.297 03:17:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:29:23.297 03:17:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2123332'
killing process with pid 2123332
00:29:23.297 03:17:18 -- common/autotest_common.sh@945 -- # kill 2123332
00:29:23.297 03:17:18 -- common/autotest_common.sh@950 -- # wait 2123332
00:29:23.559
00:29:23.559 real	0m17.628s
00:29:23.559 user	0m36.214s
00:29:23.559 sys	0m3.924s
00:29:23.559 03:17:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:23.559 03:17:18 -- common/autotest_common.sh@10 -- # set +x
00:29:23.559 ************************************
00:29:23.559 END TEST nvmf_digest_error ************************************
00:29:23.559 03:17:18 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT
00:29:23.559 03:17:18 -- host/digest.sh@139 -- # nvmftestfini
00:29:23.559 03:17:18 -- nvmf/common.sh@476 --
nvmfcleanup 00:29:23.559 03:17:18 -- nvmf/common.sh@116 -- # sync 00:29:23.559 03:17:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:23.559 03:17:18 -- nvmf/common.sh@119 -- # set +e 00:29:23.559 03:17:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:23.559 03:17:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:23.559 rmmod nvme_tcp 00:29:23.559 rmmod nvme_fabrics 00:29:23.559 rmmod nvme_keyring 00:29:23.559 03:17:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:23.559 03:17:18 -- nvmf/common.sh@123 -- # set -e 00:29:23.559 03:17:18 -- nvmf/common.sh@124 -- # return 0 00:29:23.559 03:17:18 -- nvmf/common.sh@477 -- # '[' -n 2123332 ']' 00:29:23.559 03:17:18 -- nvmf/common.sh@478 -- # killprocess 2123332 00:29:23.559 03:17:18 -- common/autotest_common.sh@926 -- # '[' -z 2123332 ']' 00:29:23.559 03:17:18 -- common/autotest_common.sh@930 -- # kill -0 2123332 00:29:23.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2123332) - No such process 00:29:23.559 03:17:18 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2123332 is not found' 00:29:23.559 Process with pid 2123332 is not found 00:29:23.559 03:17:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:23.559 03:17:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:23.559 03:17:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:23.559 03:17:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:23.559 03:17:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:23.559 03:17:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.559 03:17:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:23.559 03:17:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.462 03:17:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:25.462 00:29:25.462 real 0m37.148s 00:29:25.462 user 1m7.630s 00:29:25.462 sys 0m9.184s 00:29:25.462 
03:17:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.462 03:17:20 -- common/autotest_common.sh@10 -- # set +x 00:29:25.462 ************************************ 00:29:25.462 END TEST nvmf_digest 00:29:25.462 ************************************ 00:29:25.462 03:17:20 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:29:25.462 03:17:20 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:29:25.462 03:17:20 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:29:25.462 03:17:20 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:25.462 03:17:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:25.462 03:17:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:25.462 03:17:20 -- common/autotest_common.sh@10 -- # set +x 00:29:25.462 ************************************ 00:29:25.462 START TEST nvmf_bdevperf 00:29:25.462 ************************************ 00:29:25.462 03:17:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:25.720 * Looking for test storage... 
00:29:25.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:25.720 03:17:20 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.720 03:17:20 -- nvmf/common.sh@7 -- # uname -s 00:29:25.720 03:17:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.720 03:17:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.720 03:17:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.720 03:17:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.720 03:17:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:25.720 03:17:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.720 03:17:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.720 03:17:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.720 03:17:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.720 03:17:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.720 03:17:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:25.720 03:17:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:25.720 03:17:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.720 03:17:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.720 03:17:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.720 03:17:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.720 03:17:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.720 03:17:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.720 03:17:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.720 03:17:20 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.721 03:17:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.721 03:17:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.721 03:17:20 -- paths/export.sh@5 -- # export PATH 00:29:25.721 03:17:20 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.721 03:17:20 -- nvmf/common.sh@46 -- # : 0 00:29:25.721 03:17:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:25.721 03:17:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:25.721 03:17:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:25.721 03:17:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.721 03:17:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.721 03:17:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:25.721 03:17:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:25.721 03:17:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:25.721 03:17:20 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:25.721 03:17:20 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:25.721 03:17:20 -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:25.721 03:17:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:25.721 03:17:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.721 03:17:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:25.721 03:17:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:25.721 03:17:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:25.721 03:17:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.721 03:17:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:25.721 03:17:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.721 03:17:20 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:25.721 03:17:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:25.721 03:17:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:25.721 03:17:20 -- common/autotest_common.sh@10 -- # set +x 00:29:27.620 03:17:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:27.620 03:17:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:27.620 03:17:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:27.620 03:17:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:27.620 03:17:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:27.620 03:17:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:27.620 03:17:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:27.620 03:17:22 -- nvmf/common.sh@294 -- # net_devs=() 00:29:27.620 03:17:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:27.620 03:17:22 -- nvmf/common.sh@295 -- # e810=() 00:29:27.620 03:17:22 -- nvmf/common.sh@295 -- # local -ga e810 00:29:27.620 03:17:22 -- nvmf/common.sh@296 -- # x722=() 00:29:27.620 03:17:22 -- nvmf/common.sh@296 -- # local -ga x722 00:29:27.620 03:17:22 -- nvmf/common.sh@297 -- # mlx=() 00:29:27.620 03:17:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:27.620 03:17:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:27.620 03:17:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:27.620 03:17:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:27.620 03:17:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:27.620 03:17:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:27.620 03:17:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:27.620 03:17:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:27.620 03:17:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:27.620 03:17:22 -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:27.620 03:17:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:27.620 03:17:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:27.620 03:17:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:27.620 03:17:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:27.620 03:17:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:27.620 03:17:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:27.620 03:17:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:27.620 03:17:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:27.620 03:17:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:27.620 03:17:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:27.620 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:27.620 03:17:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:27.620 03:17:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:27.620 03:17:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.620 03:17:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.620 03:17:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:27.620 03:17:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:27.620 03:17:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:27.620 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:27.620 03:17:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:27.620 03:17:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:27.620 03:17:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.620 03:17:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.620 03:17:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:27.620 03:17:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:27.620 03:17:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:27.620 03:17:22 -- 
nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:27.620 03:17:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:27.620 03:17:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.620 03:17:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:27.620 03:17:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.620 03:17:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:27.620 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:27.620 03:17:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.620 03:17:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:27.620 03:17:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.620 03:17:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:27.620 03:17:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.620 03:17:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:27.620 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:27.620 03:17:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.620 03:17:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:27.621 03:17:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:27.621 03:17:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:27.621 03:17:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:27.621 03:17:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:27.621 03:17:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:27.621 03:17:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:27.621 03:17:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:27.621 03:17:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:27.621 03:17:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:27.621 03:17:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:27.621 03:17:22 -- 
nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:27.621 03:17:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:27.621 03:17:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:27.621 03:17:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:27.621 03:17:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:27.621 03:17:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:27.621 03:17:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:27.621 03:17:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:27.621 03:17:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:27.621 03:17:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:27.621 03:17:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:27.621 03:17:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:27.621 03:17:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:27.621 03:17:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:27.621 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:27.621 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:29:27.621 00:29:27.621 --- 10.0.0.2 ping statistics --- 00:29:27.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.621 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:29:27.621 03:17:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:27.621 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:27.621 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:29:27.621 00:29:27.621 --- 10.0.0.1 ping statistics --- 00:29:27.621 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.621 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:29:27.621 03:17:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:27.621 03:17:22 -- nvmf/common.sh@410 -- # return 0 00:29:27.621 03:17:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:27.621 03:17:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:27.621 03:17:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:27.621 03:17:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:27.621 03:17:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:27.621 03:17:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:27.621 03:17:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:27.621 03:17:22 -- host/bdevperf.sh@25 -- # tgt_init 00:29:27.621 03:17:22 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:27.621 03:17:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:27.621 03:17:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:27.621 03:17:22 -- common/autotest_common.sh@10 -- # set +x 00:29:27.621 03:17:22 -- nvmf/common.sh@469 -- # nvmfpid=2127511 00:29:27.621 03:17:22 -- nvmf/common.sh@470 -- # waitforlisten 2127511 00:29:27.621 03:17:22 -- common/autotest_common.sh@819 -- # '[' -z 2127511 ']' 00:29:27.621 03:17:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.621 03:17:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:27.621 03:17:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:27.621 03:17:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:27.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.621 03:17:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:27.621 03:17:22 -- common/autotest_common.sh@10 -- # set +x 00:29:27.880 [2024-07-14 03:17:22.892186] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:27.880 [2024-07-14 03:17:22.892277] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:27.880 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.880 [2024-07-14 03:17:22.961695] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:27.880 [2024-07-14 03:17:23.050824] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:27.880 [2024-07-14 03:17:23.051012] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:27.880 [2024-07-14 03:17:23.051029] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:27.880 [2024-07-14 03:17:23.051041] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:27.880 [2024-07-14 03:17:23.051225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:27.880 [2024-07-14 03:17:23.051280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:27.880 [2024-07-14 03:17:23.051284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:28.854 03:17:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:28.854 03:17:23 -- common/autotest_common.sh@852 -- # return 0 00:29:28.854 03:17:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:28.854 03:17:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:28.854 03:17:23 -- common/autotest_common.sh@10 -- # set +x 00:29:28.854 03:17:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:28.854 03:17:23 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:28.854 03:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:28.854 03:17:23 -- common/autotest_common.sh@10 -- # set +x 00:29:28.854 [2024-07-14 03:17:23.843029] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:28.854 03:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:28.854 03:17:23 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:28.854 03:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:28.854 03:17:23 -- common/autotest_common.sh@10 -- # set +x 00:29:28.854 Malloc0 00:29:28.854 03:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:28.854 03:17:23 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:28.854 03:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:28.854 03:17:23 -- common/autotest_common.sh@10 -- # set +x 00:29:28.854 03:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:28.854 03:17:23 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:28.854 03:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:28.854 03:17:23 -- common/autotest_common.sh@10 -- # set +x 00:29:28.854 03:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:28.854 03:17:23 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:28.854 03:17:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:28.854 03:17:23 -- common/autotest_common.sh@10 -- # set +x 00:29:28.854 [2024-07-14 03:17:23.910369] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:28.854 03:17:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:28.854 03:17:23 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:28.854 03:17:23 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:28.854 03:17:23 -- nvmf/common.sh@520 -- # config=() 00:29:28.854 03:17:23 -- nvmf/common.sh@520 -- # local subsystem config 00:29:28.854 03:17:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:28.854 03:17:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:28.854 { 00:29:28.854 "params": { 00:29:28.854 "name": "Nvme$subsystem", 00:29:28.854 "trtype": "$TEST_TRANSPORT", 00:29:28.854 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:28.854 "adrfam": "ipv4", 00:29:28.854 "trsvcid": "$NVMF_PORT", 00:29:28.854 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:28.854 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:28.854 "hdgst": ${hdgst:-false}, 00:29:28.854 "ddgst": ${ddgst:-false} 00:29:28.854 }, 00:29:28.854 "method": "bdev_nvme_attach_controller" 00:29:28.854 } 00:29:28.854 EOF 00:29:28.854 )") 00:29:28.854 03:17:23 -- nvmf/common.sh@542 -- # cat 00:29:28.854 03:17:23 -- nvmf/common.sh@544 -- # jq . 
00:29:28.854 03:17:23 -- nvmf/common.sh@545 -- # IFS=, 00:29:28.854 03:17:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:28.854 "params": { 00:29:28.854 "name": "Nvme1", 00:29:28.854 "trtype": "tcp", 00:29:28.854 "traddr": "10.0.0.2", 00:29:28.854 "adrfam": "ipv4", 00:29:28.854 "trsvcid": "4420", 00:29:28.854 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:28.854 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:28.854 "hdgst": false, 00:29:28.854 "ddgst": false 00:29:28.854 }, 00:29:28.854 "method": "bdev_nvme_attach_controller" 00:29:28.854 }' 00:29:28.854 [2024-07-14 03:17:23.953491] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:28.854 [2024-07-14 03:17:23.953566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127648 ] 00:29:28.854 EAL: No free 2048 kB hugepages reported on node 1 00:29:28.854 [2024-07-14 03:17:24.013618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.115 [2024-07-14 03:17:24.098405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:29.115 Running I/O for 1 seconds... 
00:29:30.057 00:29:30.057 Latency(us) 00:29:30.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:30.057 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:30.057 Verification LBA range: start 0x0 length 0x4000 00:29:30.057 Nvme1n1 : 1.01 13081.10 51.10 0.00 0.00 9748.71 1116.54 16505.36 00:29:30.057 =================================================================================================================== 00:29:30.057 Total : 13081.10 51.10 0.00 0.00 9748.71 1116.54 16505.36 00:29:30.316 03:17:25 -- host/bdevperf.sh@30 -- # bdevperfpid=2127814 00:29:30.316 03:17:25 -- host/bdevperf.sh@32 -- # sleep 3 00:29:30.316 03:17:25 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:30.316 03:17:25 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:30.316 03:17:25 -- nvmf/common.sh@520 -- # config=() 00:29:30.316 03:17:25 -- nvmf/common.sh@520 -- # local subsystem config 00:29:30.316 03:17:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:30.316 03:17:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:30.316 { 00:29:30.316 "params": { 00:29:30.316 "name": "Nvme$subsystem", 00:29:30.316 "trtype": "$TEST_TRANSPORT", 00:29:30.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:30.316 "adrfam": "ipv4", 00:29:30.316 "trsvcid": "$NVMF_PORT", 00:29:30.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:30.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:30.316 "hdgst": ${hdgst:-false}, 00:29:30.316 "ddgst": ${ddgst:-false} 00:29:30.316 }, 00:29:30.316 "method": "bdev_nvme_attach_controller" 00:29:30.316 } 00:29:30.316 EOF 00:29:30.316 )") 00:29:30.316 03:17:25 -- nvmf/common.sh@542 -- # cat 00:29:30.316 03:17:25 -- nvmf/common.sh@544 -- # jq . 
00:29:30.316 03:17:25 -- nvmf/common.sh@545 -- # IFS=, 00:29:30.316 03:17:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:30.316 "params": { 00:29:30.316 "name": "Nvme1", 00:29:30.316 "trtype": "tcp", 00:29:30.316 "traddr": "10.0.0.2", 00:29:30.316 "adrfam": "ipv4", 00:29:30.316 "trsvcid": "4420", 00:29:30.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:30.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:30.316 "hdgst": false, 00:29:30.316 "ddgst": false 00:29:30.316 }, 00:29:30.316 "method": "bdev_nvme_attach_controller" 00:29:30.316 }' 00:29:30.316 [2024-07-14 03:17:25.530341] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:29:30.316 [2024-07-14 03:17:25.530419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127814 ] 00:29:30.316 EAL: No free 2048 kB hugepages reported on node 1 00:29:30.574 [2024-07-14 03:17:25.591225] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.574 [2024-07-14 03:17:25.673207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.832 Running I/O for 15 seconds... 
00:29:33.369 03:17:28 -- host/bdevperf.sh@33 -- # kill -9 2127511 00:29:33.369 03:17:28 -- host/bdevperf.sh@35 -- # sleep 3 00:29:33.369 [2024-07-14 03:17:28.503993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.369 [2024-07-14 03:17:28.504053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.369 [2024-07-14 03:17:28.504085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.369 [2024-07-14 03:17:28.504103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.369 [2024-07-14 03:17:28.504121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.369 [2024-07-14 03:17:28.504136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.369 [2024-07-14 03:17:28.504152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.369 [2024-07-14 03:17:28.504185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.369 [2024-07-14 03:17:28.504202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:8400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.369 [2024-07-14 03:17:28.504216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.369 [2024-07-14 03:17:28.504246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 
nsid:1 lba:8408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.369 [2024-07-14 03:17:28.504260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.369 [2024-07-14 03:17:28.504276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.369 [2024-07-14 03:17:28.504290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.369 [2024-07-14 03:17:28.504305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.369 [2024-07-14 03:17:28.504319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.369 [2024-07-14 03:17:28.504334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.504381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.504415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 
[2024-07-14 03:17:28.504459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:8480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.504494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.504527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.504559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.504592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:8488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.504625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504641] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.504658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.504691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.504723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:8544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.504755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.504788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:8568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.504820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8584 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.504852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.504899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.504946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.504975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.504989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.505004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.505018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 
03:17:28.505033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.505047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.505062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.505076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.505091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.505105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.505121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.505135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.505166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.505182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.505200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:8624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.505215] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.505232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.505248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.505265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.505280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.505297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.505316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.505333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.505349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.505366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.505381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.505398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8672 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.505412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.505429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.505444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.505461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.505476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.370 [2024-07-14 03:17:28.505493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.370 [2024-07-14 03:17:28.505508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.505526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.505541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.505558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.505573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.505590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.505605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.505622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.505637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.505655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.505670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.505688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:9256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.505703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.505725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.505741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.505759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.371 [2024-07-14 03:17:28.505774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.505791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.505807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.505826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.505842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.505859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:9296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.371 [2024-07-14 03:17:28.505883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.505902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.371 [2024-07-14 03:17:28.505933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.505950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.505965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.505981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 
[2024-07-14 03:17:28.505995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.506025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.371 [2024-07-14 03:17:28.506055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:9344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.506085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.371 [2024-07-14 03:17:28.506115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.506171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506190] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:9368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.506205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.506239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:8808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.506272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.506305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.506338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:8856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.506371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.506405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:8872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.506439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.506473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.371 [2024-07-14 03:17:28.506506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:9384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.371 [2024-07-14 03:17:28.506539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.506572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.371 [2024-07-14 03:17:28.506609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.371 [2024-07-14 03:17:28.506643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.371 [2024-07-14 03:17:28.506677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.371 [2024-07-14 03:17:28.506705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.372 [2024-07-14 03:17:28.506722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.506739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.372 [2024-07-14 03:17:28.506754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.506772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8936 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.372 [2024-07-14 03:17:28.506788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.506805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.372 [2024-07-14 03:17:28.506820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.506836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:8952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.372 [2024-07-14 03:17:28.506852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.506875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.372 [2024-07-14 03:17:28.506892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.506909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.372 [2024-07-14 03:17:28.506938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.506955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:8984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.372 [2024-07-14 03:17:28.506968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 
03:17:28.506983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.372 [2024-07-14 03:17:28.506997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:9432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.372 [2024-07-14 03:17:28.507026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:9440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.372 [2024-07-14 03:17:28.507059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:9448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.372 [2024-07-14 03:17:28.507089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.372 [2024-07-14 03:17:28.507118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.372 [2024-07-14 03:17:28.507146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.372 [2024-07-14 03:17:28.507192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.372 [2024-07-14 03:17:28.507235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.372 [2024-07-14 03:17:28.507269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.372 [2024-07-14 03:17:28.507301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.372 [2024-07-14 03:17:28.507334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:33.372 [2024-07-14 03:17:28.507367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.372 [2024-07-14 03:17:28.507399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.372 [2024-07-14 03:17:28.507431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:9536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.372 [2024-07-14 03:17:28.507466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:9544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.372 [2024-07-14 03:17:28.507499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.372 [2024-07-14 03:17:28.507531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507547] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.372 [2024-07-14 03:17:28.507563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.372 [2024-07-14 03:17:28.507594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.372 [2024-07-14 03:17:28.507626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.372 [2024-07-14 03:17:28.507658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.372 [2024-07-14 03:17:28.507691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.372 [2024-07-14 03:17:28.507723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.372 [2024-07-14 03:17:28.507756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.372 [2024-07-14 03:17:28.507789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.372 [2024-07-14 03:17:28.507806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:9000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.372 [2024-07-14 03:17:28.507821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.507838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.373 [2024-07-14 03:17:28.507853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.507878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.373 [2024-07-14 03:17:28.507914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.507931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.373 [2024-07-14 
03:17:28.507946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.507961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.373 [2024-07-14 03:17:28.507975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.507990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.373 [2024-07-14 03:17:28.508004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.508019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:9104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.373 [2024-07-14 03:17:28.508033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.508048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.373 [2024-07-14 03:17:28.508062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.508077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.373 [2024-07-14 03:17:28.508091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.508106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:82 nsid:1 lba:9632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:33.373 [2024-07-14 03:17:28.508120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.508135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:9152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.373 [2024-07-14 03:17:28.508165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.508183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.373 [2024-07-14 03:17:28.508198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.508215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.373 [2024-07-14 03:17:28.508230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.508247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:9192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.373 [2024-07-14 03:17:28.508263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.508279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.373 [2024-07-14 03:17:28.508295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:33.373 [2024-07-14 03:17:28.508315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.373 [2024-07-14 03:17:28.508331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.508348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:9240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.373 [2024-07-14 03:17:28.508364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.508379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10df450 is same with the state(5) to be set 00:29:33.373 [2024-07-14 03:17:28.508397] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:33.373 [2024-07-14 03:17:28.508410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:33.373 [2024-07-14 03:17:28.508423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:8 PRP1 0x0 PRP2 0x0 00:29:33.373 [2024-07-14 03:17:28.508437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.508501] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10df450 was disconnected and freed. reset controller. 
00:29:33.373 [2024-07-14 03:17:28.508577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.373 [2024-07-14 03:17:28.508600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.508617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.373 [2024-07-14 03:17:28.508632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.508648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.373 [2024-07-14 03:17:28.508662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.508677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.373 [2024-07-14 03:17:28.508691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.373 [2024-07-14 03:17:28.508705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.373 [2024-07-14 03:17:28.511079] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.373 [2024-07-14 03:17:28.511115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.373 [2024-07-14 03:17:28.511785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.373 [2024-07-14 
03:17:28.512014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.373 [2024-07-14 03:17:28.512041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.373 [2024-07-14 03:17:28.512057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.373 [2024-07-14 03:17:28.512230] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.373 [2024-07-14 03:17:28.512400] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.373 [2024-07-14 03:17:28.512424] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.373 [2024-07-14 03:17:28.512447] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.373 [2024-07-14 03:17:28.514863] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.373 [2024-07-14 03:17:28.524022] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.373 [2024-07-14 03:17:28.524472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.373 [2024-07-14 03:17:28.524855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.373 [2024-07-14 03:17:28.524949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.373 [2024-07-14 03:17:28.524984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.373 [2024-07-14 03:17:28.525166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.373 [2024-07-14 03:17:28.525359] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.373 [2024-07-14 03:17:28.525383] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.374 [2024-07-14 03:17:28.525399] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.374 [2024-07-14 03:17:28.527792] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.374 [2024-07-14 03:17:28.536786] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.374 [2024-07-14 03:17:28.537182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.374 [2024-07-14 03:17:28.537452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.374 [2024-07-14 03:17:28.537480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.374 [2024-07-14 03:17:28.537498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.374 [2024-07-14 03:17:28.537646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.374 [2024-07-14 03:17:28.537743] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.374 [2024-07-14 03:17:28.537766] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.374 [2024-07-14 03:17:28.537782] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.374 [2024-07-14 03:17:28.540066] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.374 [2024-07-14 03:17:28.549402] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.374 [2024-07-14 03:17:28.549732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.374 [2024-07-14 03:17:28.549938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.374 [2024-07-14 03:17:28.549966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.374 [2024-07-14 03:17:28.549984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.374 [2024-07-14 03:17:28.550095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.374 [2024-07-14 03:17:28.550282] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.374 [2024-07-14 03:17:28.550307] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.374 [2024-07-14 03:17:28.550323] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.374 [2024-07-14 03:17:28.552543] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.374 [2024-07-14 03:17:28.561821] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.374 [2024-07-14 03:17:28.562232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.374 [2024-07-14 03:17:28.562468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.374 [2024-07-14 03:17:28.562496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.374 [2024-07-14 03:17:28.562513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.374 [2024-07-14 03:17:28.562697] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.374 [2024-07-14 03:17:28.562878] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.374 [2024-07-14 03:17:28.562915] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.374 [2024-07-14 03:17:28.562929] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.374 [2024-07-14 03:17:28.565396] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.374 [2024-07-14 03:17:28.574345] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.374 [2024-07-14 03:17:28.574913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.374 [2024-07-14 03:17:28.575108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.374 [2024-07-14 03:17:28.575133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.374 [2024-07-14 03:17:28.575149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.374 [2024-07-14 03:17:28.575333] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.374 [2024-07-14 03:17:28.575504] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.374 [2024-07-14 03:17:28.575527] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.374 [2024-07-14 03:17:28.575543] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.374 [2024-07-14 03:17:28.577984] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.374 [2024-07-14 03:17:28.587107] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.374 [2024-07-14 03:17:28.587486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.374 [2024-07-14 03:17:28.587710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.374 [2024-07-14 03:17:28.587761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.374 [2024-07-14 03:17:28.587779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.374 [2024-07-14 03:17:28.587964] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.374 [2024-07-14 03:17:28.588097] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.374 [2024-07-14 03:17:28.588118] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.374 [2024-07-14 03:17:28.588132] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.374 [2024-07-14 03:17:28.590536] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.374 [2024-07-14 03:17:28.599758] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.374 [2024-07-14 03:17:28.600139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.374 [2024-07-14 03:17:28.600349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.374 [2024-07-14 03:17:28.600377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.374 [2024-07-14 03:17:28.600394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.374 [2024-07-14 03:17:28.600577] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.374 [2024-07-14 03:17:28.600747] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.374 [2024-07-14 03:17:28.600771] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.374 [2024-07-14 03:17:28.600787] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.374 [2024-07-14 03:17:28.603261] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.374 [2024-07-14 03:17:28.612225] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.374 [2024-07-14 03:17:28.612673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.374 [2024-07-14 03:17:28.612925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.374 [2024-07-14 03:17:28.612954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.374 [2024-07-14 03:17:28.612971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.374 [2024-07-14 03:17:28.613119] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.374 [2024-07-14 03:17:28.613288] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.374 [2024-07-14 03:17:28.613313] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.374 [2024-07-14 03:17:28.613329] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.374 [2024-07-14 03:17:28.615658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.636 [2024-07-14 03:17:28.624686] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.636 [2024-07-14 03:17:28.625054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.636 [2024-07-14 03:17:28.625457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.636 [2024-07-14 03:17:28.625519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.636 [2024-07-14 03:17:28.625537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.636 [2024-07-14 03:17:28.625702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.636 [2024-07-14 03:17:28.625887] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.636 [2024-07-14 03:17:28.625912] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.636 [2024-07-14 03:17:28.625928] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.636 [2024-07-14 03:17:28.628158] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.636 [2024-07-14 03:17:28.637253] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.636 [2024-07-14 03:17:28.637649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.636 [2024-07-14 03:17:28.637881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.636 [2024-07-14 03:17:28.637911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.636 [2024-07-14 03:17:28.637929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.636 [2024-07-14 03:17:28.638094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.636 [2024-07-14 03:17:28.638264] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.636 [2024-07-14 03:17:28.638288] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.636 [2024-07-14 03:17:28.638303] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.636 [2024-07-14 03:17:28.640631] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.636 [2024-07-14 03:17:28.649976] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.636 [2024-07-14 03:17:28.650376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.636 [2024-07-14 03:17:28.650588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.636 [2024-07-14 03:17:28.650617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.636 [2024-07-14 03:17:28.650634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.636 [2024-07-14 03:17:28.650836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.636 [2024-07-14 03:17:28.651016] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.636 [2024-07-14 03:17:28.651041] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.636 [2024-07-14 03:17:28.651057] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.636 [2024-07-14 03:17:28.653502] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.636 [2024-07-14 03:17:28.662589] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.636 [2024-07-14 03:17:28.662978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.636 [2024-07-14 03:17:28.663176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.636 [2024-07-14 03:17:28.663204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.636 [2024-07-14 03:17:28.663222] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.636 [2024-07-14 03:17:28.663405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.636 [2024-07-14 03:17:28.663574] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.636 [2024-07-14 03:17:28.663598] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.636 [2024-07-14 03:17:28.663614] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.636 [2024-07-14 03:17:28.665849] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.636 [2024-07-14 03:17:28.675417] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.636 [2024-07-14 03:17:28.675849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.636 [2024-07-14 03:17:28.676090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.636 [2024-07-14 03:17:28.676120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.636 [2024-07-14 03:17:28.676137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.636 [2024-07-14 03:17:28.676285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.636 [2024-07-14 03:17:28.676472] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.636 [2024-07-14 03:17:28.676496] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.636 [2024-07-14 03:17:28.676512] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.636 [2024-07-14 03:17:28.678912] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.636 [2024-07-14 03:17:28.688081] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.636 [2024-07-14 03:17:28.688461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.636 [2024-07-14 03:17:28.688685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.636 [2024-07-14 03:17:28.688713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.637 [2024-07-14 03:17:28.688730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.637 [2024-07-14 03:17:28.688944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.637 [2024-07-14 03:17:28.689078] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.637 [2024-07-14 03:17:28.689102] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.637 [2024-07-14 03:17:28.689118] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.637 [2024-07-14 03:17:28.691256] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.637 [2024-07-14 03:17:28.701023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.637 [2024-07-14 03:17:28.701369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.637 [2024-07-14 03:17:28.701527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.637 [2024-07-14 03:17:28.701569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.637 [2024-07-14 03:17:28.701586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.637 [2024-07-14 03:17:28.701734] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.637 [2024-07-14 03:17:28.701915] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.637 [2024-07-14 03:17:28.701940] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.637 [2024-07-14 03:17:28.701955] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.637 [2024-07-14 03:17:28.704203] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.637 [2024-07-14 03:17:28.713436] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.637 [2024-07-14 03:17:28.713786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.637 [2024-07-14 03:17:28.713992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.637 [2024-07-14 03:17:28.714022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.637 [2024-07-14 03:17:28.714044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.637 [2024-07-14 03:17:28.714211] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.637 [2024-07-14 03:17:28.714380] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.637 [2024-07-14 03:17:28.714404] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.637 [2024-07-14 03:17:28.714419] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.637 [2024-07-14 03:17:28.716959] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.637 [2024-07-14 03:17:28.726125] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.637 [2024-07-14 03:17:28.726484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.637 [2024-07-14 03:17:28.726685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.637 [2024-07-14 03:17:28.726714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.637 [2024-07-14 03:17:28.726731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.637 [2024-07-14 03:17:28.726891] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.637 [2024-07-14 03:17:28.727079] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.637 [2024-07-14 03:17:28.727103] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.637 [2024-07-14 03:17:28.727118] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.637 [2024-07-14 03:17:28.729325] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.637 [2024-07-14 03:17:28.739010] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.637 [2024-07-14 03:17:28.739327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.637 [2024-07-14 03:17:28.739598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.637 [2024-07-14 03:17:28.739646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.637 [2024-07-14 03:17:28.739663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.637 [2024-07-14 03:17:28.739811] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.637 [2024-07-14 03:17:28.739956] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.637 [2024-07-14 03:17:28.739981] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.637 [2024-07-14 03:17:28.739996] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.637 [2024-07-14 03:17:28.742423] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.637 [2024-07-14 03:17:28.751412] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.637 [2024-07-14 03:17:28.751829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.637 [2024-07-14 03:17:28.752069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.637 [2024-07-14 03:17:28.752099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.637 [2024-07-14 03:17:28.752116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.637 [2024-07-14 03:17:28.752251] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.637 [2024-07-14 03:17:28.752420] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.637 [2024-07-14 03:17:28.752444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.637 [2024-07-14 03:17:28.752459] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.637 [2024-07-14 03:17:28.754852] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.637 [2024-07-14 03:17:28.763913] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.637 [2024-07-14 03:17:28.764269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.637 [2024-07-14 03:17:28.764570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.637 [2024-07-14 03:17:28.764622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.637 [2024-07-14 03:17:28.764639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.637 [2024-07-14 03:17:28.764822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.637 [2024-07-14 03:17:28.765010] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.637 [2024-07-14 03:17:28.765034] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.637 [2024-07-14 03:17:28.765050] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.637 [2024-07-14 03:17:28.767245] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.637 [2024-07-14 03:17:28.776423] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.637 [2024-07-14 03:17:28.776779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.637 [2024-07-14 03:17:28.776977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.637 [2024-07-14 03:17:28.777007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.637 [2024-07-14 03:17:28.777024] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.637 [2024-07-14 03:17:28.777243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.637 [2024-07-14 03:17:28.777377] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.637 [2024-07-14 03:17:28.777401] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.637 [2024-07-14 03:17:28.777416] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.638 [2024-07-14 03:17:28.779859] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.638 [2024-07-14 03:17:28.789060] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.638 [2024-07-14 03:17:28.789422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.638 [2024-07-14 03:17:28.789692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.638 [2024-07-14 03:17:28.789741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.638 [2024-07-14 03:17:28.789759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.638 [2024-07-14 03:17:28.789979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.638 [2024-07-14 03:17:28.790151] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.638 [2024-07-14 03:17:28.790174] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.638 [2024-07-14 03:17:28.790190] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.638 [2024-07-14 03:17:28.792487] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.638 [2024-07-14 03:17:28.801668] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.638 [2024-07-14 03:17:28.802012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.638 [2024-07-14 03:17:28.802265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.638 [2024-07-14 03:17:28.802316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.638 [2024-07-14 03:17:28.802333] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.638 [2024-07-14 03:17:28.802516] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.638 [2024-07-14 03:17:28.802704] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.638 [2024-07-14 03:17:28.802728] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.638 [2024-07-14 03:17:28.802743] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.638 [2024-07-14 03:17:28.805001] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.638 [2024-07-14 03:17:28.814152] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.638 [2024-07-14 03:17:28.814479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.638 [2024-07-14 03:17:28.814669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.638 [2024-07-14 03:17:28.814697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.638 [2024-07-14 03:17:28.814714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.638 [2024-07-14 03:17:28.814892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.638 [2024-07-14 03:17:28.815026] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.638 [2024-07-14 03:17:28.815049] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.638 [2024-07-14 03:17:28.815065] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.638 [2024-07-14 03:17:28.817383] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.638 [2024-07-14 03:17:28.826637] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.638 [2024-07-14 03:17:28.827045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.638 [2024-07-14 03:17:28.827258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.638 [2024-07-14 03:17:28.827284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.638 [2024-07-14 03:17:28.827299] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.638 [2024-07-14 03:17:28.827496] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.638 [2024-07-14 03:17:28.827689] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.638 [2024-07-14 03:17:28.827713] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.638 [2024-07-14 03:17:28.827729] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.638 [2024-07-14 03:17:28.830148] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.638 [2024-07-14 03:17:28.839240] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.638 [2024-07-14 03:17:28.839645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.638 [2024-07-14 03:17:28.839849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.638 [2024-07-14 03:17:28.839888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.638 [2024-07-14 03:17:28.839907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.638 [2024-07-14 03:17:28.840090] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.638 [2024-07-14 03:17:28.840224] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.638 [2024-07-14 03:17:28.840247] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.638 [2024-07-14 03:17:28.840263] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.638 [2024-07-14 03:17:28.842474] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.638 [2024-07-14 03:17:28.852021] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.638 [2024-07-14 03:17:28.852396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.638 [2024-07-14 03:17:28.852798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.638 [2024-07-14 03:17:28.852885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.638 [2024-07-14 03:17:28.852906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.638 [2024-07-14 03:17:28.853091] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.638 [2024-07-14 03:17:28.853279] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.638 [2024-07-14 03:17:28.853302] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.638 [2024-07-14 03:17:28.853319] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.638 [2024-07-14 03:17:28.855746] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.638 [2024-07-14 03:17:28.864599] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.638 [2024-07-14 03:17:28.864977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.638 [2024-07-14 03:17:28.865153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.638 [2024-07-14 03:17:28.865181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.638 [2024-07-14 03:17:28.865198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.638 [2024-07-14 03:17:28.865310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.638 [2024-07-14 03:17:28.865461] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.638 [2024-07-14 03:17:28.865485] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.638 [2024-07-14 03:17:28.865506] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.638 [2024-07-14 03:17:28.867740] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.638 [2024-07-14 03:17:28.877170] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.638 [2024-07-14 03:17:28.877564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.638 [2024-07-14 03:17:28.877914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.638 [2024-07-14 03:17:28.877943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.638 [2024-07-14 03:17:28.877961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.638 [2024-07-14 03:17:28.878108] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.638 [2024-07-14 03:17:28.878205] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.638 [2024-07-14 03:17:28.878228] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.638 [2024-07-14 03:17:28.878244] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.638 [2024-07-14 03:17:28.880618] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.899 [2024-07-14 03:17:28.889873] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.900 [2024-07-14 03:17:28.890276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-14 03:17:28.890540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-14 03:17:28.890586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.900 [2024-07-14 03:17:28.890604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.900 [2024-07-14 03:17:28.890806] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.900 [2024-07-14 03:17:28.891025] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.900 [2024-07-14 03:17:28.891050] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.900 [2024-07-14 03:17:28.891066] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.900 [2024-07-14 03:17:28.893423] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.900 [2024-07-14 03:17:28.902491] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.900 [2024-07-14 03:17:28.902896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-14 03:17:28.903128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-14 03:17:28.903154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.900 [2024-07-14 03:17:28.903170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.900 [2024-07-14 03:17:28.903345] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.900 [2024-07-14 03:17:28.903486] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.900 [2024-07-14 03:17:28.903510] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.900 [2024-07-14 03:17:28.903531] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.900 [2024-07-14 03:17:28.905821] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.900 [2024-07-14 03:17:28.915072] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.900 [2024-07-14 03:17:28.915464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-14 03:17:28.915736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-14 03:17:28.915783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.900 [2024-07-14 03:17:28.915800] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.900 [2024-07-14 03:17:28.915959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.900 [2024-07-14 03:17:28.916129] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.900 [2024-07-14 03:17:28.916153] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.900 [2024-07-14 03:17:28.916169] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.900 [2024-07-14 03:17:28.918470] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.900 [2024-07-14 03:17:28.927573] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.900 [2024-07-14 03:17:28.927972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-14 03:17:28.928188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-14 03:17:28.928216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.900 [2024-07-14 03:17:28.928234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.900 [2024-07-14 03:17:28.928399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.900 [2024-07-14 03:17:28.928569] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.900 [2024-07-14 03:17:28.928593] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.900 [2024-07-14 03:17:28.928609] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.900 [2024-07-14 03:17:28.931065] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.900 [2024-07-14 03:17:28.940311] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.900 [2024-07-14 03:17:28.940730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-14 03:17:28.940950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-14 03:17:28.940976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.900 [2024-07-14 03:17:28.940991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.900 [2024-07-14 03:17:28.941133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.900 [2024-07-14 03:17:28.941268] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.900 [2024-07-14 03:17:28.941291] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.900 [2024-07-14 03:17:28.941307] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.900 [2024-07-14 03:17:28.943588] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.900 [2024-07-14 03:17:28.952950] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.900 [2024-07-14 03:17:28.953350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-14 03:17:28.953554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-14 03:17:28.953582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.900 [2024-07-14 03:17:28.953600] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.900 [2024-07-14 03:17:28.953766] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.900 [2024-07-14 03:17:28.953984] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.900 [2024-07-14 03:17:28.954008] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.900 [2024-07-14 03:17:28.954024] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.900 [2024-07-14 03:17:28.956287] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.900 [2024-07-14 03:17:28.965408] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.900 [2024-07-14 03:17:28.965748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-14 03:17:28.965980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-14 03:17:28.966006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.900 [2024-07-14 03:17:28.966022] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.900 [2024-07-14 03:17:28.966170] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.900 [2024-07-14 03:17:28.966358] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.900 [2024-07-14 03:17:28.966382] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.900 [2024-07-14 03:17:28.966398] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.900 [2024-07-14 03:17:28.968719] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.900 [2024-07-14 03:17:28.978106] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.900 [2024-07-14 03:17:28.978497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-14 03:17:28.978803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.900 [2024-07-14 03:17:28.978855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.900 [2024-07-14 03:17:28.978885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.901 [2024-07-14 03:17:28.979033] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.901 [2024-07-14 03:17:28.979221] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.901 [2024-07-14 03:17:28.979245] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.901 [2024-07-14 03:17:28.979260] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.901 [2024-07-14 03:17:28.981633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.901 [2024-07-14 03:17:28.990585] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.901 [2024-07-14 03:17:28.991018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-14 03:17:28.991375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-14 03:17:28.991425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.901 [2024-07-14 03:17:28.991442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.901 [2024-07-14 03:17:28.991662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.901 [2024-07-14 03:17:28.991814] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.901 [2024-07-14 03:17:28.991838] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.901 [2024-07-14 03:17:28.991853] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.901 [2024-07-14 03:17:28.994073] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.901 [2024-07-14 03:17:29.003129] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.901 [2024-07-14 03:17:29.003478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-14 03:17:29.003783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-14 03:17:29.003840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.901 [2024-07-14 03:17:29.003858] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.901 [2024-07-14 03:17:29.004000] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.901 [2024-07-14 03:17:29.004188] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.901 [2024-07-14 03:17:29.004212] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.901 [2024-07-14 03:17:29.004228] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.901 [2024-07-14 03:17:29.006545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.901 [2024-07-14 03:17:29.015646] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:33.901 [2024-07-14 03:17:29.016033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-14 03:17:29.016271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.901 [2024-07-14 03:17:29.016323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:33.901 [2024-07-14 03:17:29.016341] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:33.901 [2024-07-14 03:17:29.016507] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:33.901 [2024-07-14 03:17:29.016676] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:33.901 [2024-07-14 03:17:29.016700] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:33.901 [2024-07-14 03:17:29.016715] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:33.901 [2024-07-14 03:17:29.019131] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:33.901 [2024-07-14 03:17:29.028271] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.901 [2024-07-14 03:17:29.028577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.901 [2024-07-14 03:17:29.028875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.901 [2024-07-14 03:17:29.028923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:33.901 [2024-07-14 03:17:29.028941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:33.901 [2024-07-14 03:17:29.029124] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:33.901 [2024-07-14 03:17:29.029312] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.901 [2024-07-14 03:17:29.029335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.901 [2024-07-14 03:17:29.029351] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.901 [2024-07-14 03:17:29.031669] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.901 [2024-07-14 03:17:29.040907] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.901 [2024-07-14 03:17:29.041233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.901 [2024-07-14 03:17:29.041526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.901 [2024-07-14 03:17:29.041576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:33.901 [2024-07-14 03:17:29.041594] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:33.901 [2024-07-14 03:17:29.041724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:33.901 [2024-07-14 03:17:29.041906] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.901 [2024-07-14 03:17:29.041931] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.901 [2024-07-14 03:17:29.041946] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.901 [2024-07-14 03:17:29.044283] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.901 [2024-07-14 03:17:29.053652] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.901 [2024-07-14 03:17:29.054072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.901 [2024-07-14 03:17:29.054526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.901 [2024-07-14 03:17:29.054570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:33.901 [2024-07-14 03:17:29.054590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:33.901 [2024-07-14 03:17:29.054744] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:33.901 [2024-07-14 03:17:29.054931] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.901 [2024-07-14 03:17:29.054956] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.901 [2024-07-14 03:17:29.054972] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.901 [2024-07-14 03:17:29.057221] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.901 [2024-07-14 03:17:29.066226] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.901 [2024-07-14 03:17:29.066644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.901 [2024-07-14 03:17:29.066841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.901 [2024-07-14 03:17:29.066877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:33.901 [2024-07-14 03:17:29.066901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:33.901 [2024-07-14 03:17:29.067056] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:33.901 [2024-07-14 03:17:29.067262] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.901 [2024-07-14 03:17:29.067285] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.901 [2024-07-14 03:17:29.067301] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.901 [2024-07-14 03:17:29.069568] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.901 [2024-07-14 03:17:29.078756] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.901 [2024-07-14 03:17:29.079148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.901 [2024-07-14 03:17:29.079351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.901 [2024-07-14 03:17:29.079380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:33.901 [2024-07-14 03:17:29.079398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:33.901 [2024-07-14 03:17:29.079564] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:33.901 [2024-07-14 03:17:29.079716] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.902 [2024-07-14 03:17:29.079739] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.902 [2024-07-14 03:17:29.079755] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.902 [2024-07-14 03:17:29.082049] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.902 [2024-07-14 03:17:29.091408] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.902 [2024-07-14 03:17:29.091768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.902 [2024-07-14 03:17:29.091945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.902 [2024-07-14 03:17:29.091975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:33.902 [2024-07-14 03:17:29.091993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:33.902 [2024-07-14 03:17:29.092122] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:33.902 [2024-07-14 03:17:29.092292] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.902 [2024-07-14 03:17:29.092316] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.902 [2024-07-14 03:17:29.092331] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.902 [2024-07-14 03:17:29.094706] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.902 [2024-07-14 03:17:29.103788] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.902 [2024-07-14 03:17:29.104200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.902 [2024-07-14 03:17:29.104400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.902 [2024-07-14 03:17:29.104427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:33.902 [2024-07-14 03:17:29.104450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:33.902 [2024-07-14 03:17:29.104617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:33.902 [2024-07-14 03:17:29.104823] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.902 [2024-07-14 03:17:29.104846] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.902 [2024-07-14 03:17:29.104862] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.902 [2024-07-14 03:17:29.107357] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.902 [2024-07-14 03:17:29.116535] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.902 [2024-07-14 03:17:29.116922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.902 [2024-07-14 03:17:29.117148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.902 [2024-07-14 03:17:29.117173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:33.902 [2024-07-14 03:17:29.117189] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:33.902 [2024-07-14 03:17:29.117388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:33.902 [2024-07-14 03:17:29.117541] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.902 [2024-07-14 03:17:29.117564] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.902 [2024-07-14 03:17:29.117580] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.902 [2024-07-14 03:17:29.119893] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.902 [2024-07-14 03:17:29.129238] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.902 [2024-07-14 03:17:29.129632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.902 [2024-07-14 03:17:29.129847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.902 [2024-07-14 03:17:29.129881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:33.902 [2024-07-14 03:17:29.129899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:33.902 [2024-07-14 03:17:29.130089] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:33.902 [2024-07-14 03:17:29.130259] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.902 [2024-07-14 03:17:29.130282] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.902 [2024-07-14 03:17:29.130298] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.902 [2024-07-14 03:17:29.132652] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:33.902 [2024-07-14 03:17:29.141780] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:33.902 [2024-07-14 03:17:29.142112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.902 [2024-07-14 03:17:29.142498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:33.902 [2024-07-14 03:17:29.142555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:33.902 [2024-07-14 03:17:29.142573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:33.902 [2024-07-14 03:17:29.142761] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:33.902 [2024-07-14 03:17:29.142962] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:33.902 [2024-07-14 03:17:29.142986] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:33.902 [2024-07-14 03:17:29.143002] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:33.902 [2024-07-14 03:17:29.145248] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.162 [2024-07-14 03:17:29.154431] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.162 [2024-07-14 03:17:29.154942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.162 [2024-07-14 03:17:29.155155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.162 [2024-07-14 03:17:29.155184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.162 [2024-07-14 03:17:29.155201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.162 [2024-07-14 03:17:29.155420] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.162 [2024-07-14 03:17:29.155591] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.162 [2024-07-14 03:17:29.155614] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.162 [2024-07-14 03:17:29.155630] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.162 [2024-07-14 03:17:29.157959] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.162 [2024-07-14 03:17:29.167186] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.162 [2024-07-14 03:17:29.167597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.162 [2024-07-14 03:17:29.167829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.162 [2024-07-14 03:17:29.167857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.162 [2024-07-14 03:17:29.167887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.162 [2024-07-14 03:17:29.168054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.162 [2024-07-14 03:17:29.168206] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.162 [2024-07-14 03:17:29.168230] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.162 [2024-07-14 03:17:29.168245] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.162 [2024-07-14 03:17:29.170458] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.162 [2024-07-14 03:17:29.179863] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.162 [2024-07-14 03:17:29.180250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.162 [2024-07-14 03:17:29.180501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.162 [2024-07-14 03:17:29.180557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.162 [2024-07-14 03:17:29.180575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.162 [2024-07-14 03:17:29.180704] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.162 [2024-07-14 03:17:29.180908] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.162 [2024-07-14 03:17:29.180933] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.162 [2024-07-14 03:17:29.180948] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.162 [2024-07-14 03:17:29.183087] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.162 [2024-07-14 03:17:29.192504] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.162 [2024-07-14 03:17:29.192854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.162 [2024-07-14 03:17:29.193075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.162 [2024-07-14 03:17:29.193101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.162 [2024-07-14 03:17:29.193116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.162 [2024-07-14 03:17:29.193282] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.162 [2024-07-14 03:17:29.193471] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.162 [2024-07-14 03:17:29.193495] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.162 [2024-07-14 03:17:29.193511] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.162 [2024-07-14 03:17:29.195840] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.162 [2024-07-14 03:17:29.204964] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.162 [2024-07-14 03:17:29.205536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.162 [2024-07-14 03:17:29.205917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.162 [2024-07-14 03:17:29.205946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.162 [2024-07-14 03:17:29.205964] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.162 [2024-07-14 03:17:29.206147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.162 [2024-07-14 03:17:29.206263] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.162 [2024-07-14 03:17:29.206286] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.162 [2024-07-14 03:17:29.206302] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.162 [2024-07-14 03:17:29.208499] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.163 [2024-07-14 03:17:29.217804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.163 [2024-07-14 03:17:29.218210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.218483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.218532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.163 [2024-07-14 03:17:29.218550] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.163 [2024-07-14 03:17:29.218661] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.163 [2024-07-14 03:17:29.218849] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.163 [2024-07-14 03:17:29.218889] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.163 [2024-07-14 03:17:29.218907] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.163 [2024-07-14 03:17:29.221208] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.163 [2024-07-14 03:17:29.230424] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.163 [2024-07-14 03:17:29.230936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.231160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.231189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.163 [2024-07-14 03:17:29.231206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.163 [2024-07-14 03:17:29.231371] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.163 [2024-07-14 03:17:29.231540] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.163 [2024-07-14 03:17:29.231564] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.163 [2024-07-14 03:17:29.231580] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.163 [2024-07-14 03:17:29.233814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.163 [2024-07-14 03:17:29.243181] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.163 [2024-07-14 03:17:29.243731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.243939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.243968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.163 [2024-07-14 03:17:29.243986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.163 [2024-07-14 03:17:29.244133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.163 [2024-07-14 03:17:29.244266] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.163 [2024-07-14 03:17:29.244290] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.163 [2024-07-14 03:17:29.244306] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.163 [2024-07-14 03:17:29.246595] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.163 [2024-07-14 03:17:29.255843] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.163 [2024-07-14 03:17:29.256214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.256525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.256584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.163 [2024-07-14 03:17:29.256601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.163 [2024-07-14 03:17:29.256785] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.163 [2024-07-14 03:17:29.256947] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.163 [2024-07-14 03:17:29.256973] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.163 [2024-07-14 03:17:29.256994] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.163 [2024-07-14 03:17:29.259277] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.163 [2024-07-14 03:17:29.268495] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.163 [2024-07-14 03:17:29.268913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.269094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.269121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.163 [2024-07-14 03:17:29.269138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.163 [2024-07-14 03:17:29.269305] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.163 [2024-07-14 03:17:29.269494] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.163 [2024-07-14 03:17:29.269518] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.163 [2024-07-14 03:17:29.269534] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.163 [2024-07-14 03:17:29.271917] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.163 [2024-07-14 03:17:29.281122] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.163 [2024-07-14 03:17:29.281532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.281752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.281778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.163 [2024-07-14 03:17:29.281793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.163 [2024-07-14 03:17:29.281991] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.163 [2024-07-14 03:17:29.282180] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.163 [2024-07-14 03:17:29.282204] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.163 [2024-07-14 03:17:29.282220] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.163 [2024-07-14 03:17:29.284502] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.163 [2024-07-14 03:17:29.293770] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.163 [2024-07-14 03:17:29.294177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.294446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.294492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.163 [2024-07-14 03:17:29.294510] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.163 [2024-07-14 03:17:29.294638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.163 [2024-07-14 03:17:29.294827] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.163 [2024-07-14 03:17:29.294850] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.163 [2024-07-14 03:17:29.294874] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.163 [2024-07-14 03:17:29.297128] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.163 [2024-07-14 03:17:29.306455] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.163 [2024-07-14 03:17:29.306807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.306991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.307020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.163 [2024-07-14 03:17:29.307037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.163 [2024-07-14 03:17:29.307220] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.163 [2024-07-14 03:17:29.307408] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.163 [2024-07-14 03:17:29.307432] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.163 [2024-07-14 03:17:29.307448] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.163 [2024-07-14 03:17:29.309675] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.163 [2024-07-14 03:17:29.319157] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.163 [2024-07-14 03:17:29.319484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.319685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.319713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.163 [2024-07-14 03:17:29.319730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.163 [2024-07-14 03:17:29.319904] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.163 [2024-07-14 03:17:29.320075] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.163 [2024-07-14 03:17:29.320099] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.163 [2024-07-14 03:17:29.320116] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.163 [2024-07-14 03:17:29.322468] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.163 [2024-07-14 03:17:29.331734] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.163 [2024-07-14 03:17:29.332100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.332369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.163 [2024-07-14 03:17:29.332397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.163 [2024-07-14 03:17:29.332415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.163 [2024-07-14 03:17:29.332544] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.163 [2024-07-14 03:17:29.332777] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.163 [2024-07-14 03:17:29.332801] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.163 [2024-07-14 03:17:29.332817] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.163 [2024-07-14 03:17:29.335051] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.164 [2024-07-14 03:17:29.344239] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.164 [2024-07-14 03:17:29.344599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.164 [2024-07-14 03:17:29.344799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.164 [2024-07-14 03:17:29.344825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.164 [2024-07-14 03:17:29.344841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.164 [2024-07-14 03:17:29.345012] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.164 [2024-07-14 03:17:29.345164] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.164 [2024-07-14 03:17:29.345188] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.164 [2024-07-14 03:17:29.345204] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.164 [2024-07-14 03:17:29.347466] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.164 [2024-07-14 03:17:29.357054] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.164 [2024-07-14 03:17:29.357445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.164 [2024-07-14 03:17:29.357645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.164 [2024-07-14 03:17:29.357670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.164 [2024-07-14 03:17:29.357686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.164 [2024-07-14 03:17:29.357906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.164 [2024-07-14 03:17:29.358095] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.164 [2024-07-14 03:17:29.358119] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.164 [2024-07-14 03:17:29.358135] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.164 [2024-07-14 03:17:29.360416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.164 [2024-07-14 03:17:29.369556] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.164 [2024-07-14 03:17:29.369899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-14 03:17:29.370057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-14 03:17:29.370085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.164 [2024-07-14 03:17:29.370103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.164 [2024-07-14 03:17:29.370268] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.164 [2024-07-14 03:17:29.370419] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.164 [2024-07-14 03:17:29.370443] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.164 [2024-07-14 03:17:29.370458] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.164 [2024-07-14 03:17:29.372820] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.164 [2024-07-14 03:17:29.382133] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.164 [2024-07-14 03:17:29.382526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-14 03:17:29.382941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-14 03:17:29.382971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.164 [2024-07-14 03:17:29.382989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.164 [2024-07-14 03:17:29.383118] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.164 [2024-07-14 03:17:29.383330] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.164 [2024-07-14 03:17:29.383355] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.164 [2024-07-14 03:17:29.383370] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.164 [2024-07-14 03:17:29.385710] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.164 [2024-07-14 03:17:29.394723] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.164 [2024-07-14 03:17:29.395104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-14 03:17:29.395410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-14 03:17:29.395462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.164 [2024-07-14 03:17:29.395480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.164 [2024-07-14 03:17:29.395645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.164 [2024-07-14 03:17:29.395815] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.164 [2024-07-14 03:17:29.395838] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.164 [2024-07-14 03:17:29.395854] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.164 [2024-07-14 03:17:29.398184] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.164 [2024-07-14 03:17:29.407314] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.164 [2024-07-14 03:17:29.407677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-14 03:17:29.407852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.164 [2024-07-14 03:17:29.407888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.164 [2024-07-14 03:17:29.407906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.164 [2024-07-14 03:17:29.408090] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.164 [2024-07-14 03:17:29.408241] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.164 [2024-07-14 03:17:29.408265] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.164 [2024-07-14 03:17:29.408281] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.164 [2024-07-14 03:17:29.410625] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.424 [2024-07-14 03:17:29.419910] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.424 [2024-07-14 03:17:29.420336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.420548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.420594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.424 [2024-07-14 03:17:29.420616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.424 [2024-07-14 03:17:29.420764] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.424 [2024-07-14 03:17:29.420964] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.424 [2024-07-14 03:17:29.420989] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.424 [2024-07-14 03:17:29.421005] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.424 [2024-07-14 03:17:29.423089] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.424 [2024-07-14 03:17:29.432471] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.424 [2024-07-14 03:17:29.432790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.433023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.433053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.424 [2024-07-14 03:17:29.433070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.424 [2024-07-14 03:17:29.433236] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.424 [2024-07-14 03:17:29.433423] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.424 [2024-07-14 03:17:29.433447] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.424 [2024-07-14 03:17:29.433463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.424 [2024-07-14 03:17:29.435619] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.424 [2024-07-14 03:17:29.445106] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.424 [2024-07-14 03:17:29.445484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.445862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.445928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.424 [2024-07-14 03:17:29.445944] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.424 [2024-07-14 03:17:29.446113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.424 [2024-07-14 03:17:29.446319] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.424 [2024-07-14 03:17:29.446344] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.424 [2024-07-14 03:17:29.446359] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.424 [2024-07-14 03:17:29.448854] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.424 [2024-07-14 03:17:29.457702] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.424 [2024-07-14 03:17:29.458120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.458320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.458349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.424 [2024-07-14 03:17:29.458371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.424 [2024-07-14 03:17:29.458538] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.424 [2024-07-14 03:17:29.458672] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.424 [2024-07-14 03:17:29.458696] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.424 [2024-07-14 03:17:29.458711] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.424 [2024-07-14 03:17:29.461092] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.424 [2024-07-14 03:17:29.470271] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.424 [2024-07-14 03:17:29.470642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.470874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.470903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.424 [2024-07-14 03:17:29.470920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.424 [2024-07-14 03:17:29.471067] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.424 [2024-07-14 03:17:29.471219] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.424 [2024-07-14 03:17:29.471243] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.424 [2024-07-14 03:17:29.471259] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.424 [2024-07-14 03:17:29.473488] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.424 [2024-07-14 03:17:29.482795] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.424 [2024-07-14 03:17:29.483236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.483436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.483464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.424 [2024-07-14 03:17:29.483482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.424 [2024-07-14 03:17:29.483629] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.424 [2024-07-14 03:17:29.483781] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.424 [2024-07-14 03:17:29.483805] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.424 [2024-07-14 03:17:29.483821] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.424 [2024-07-14 03:17:29.486113] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.424 [2024-07-14 03:17:29.495405] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.424 [2024-07-14 03:17:29.495837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.496025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.496051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.424 [2024-07-14 03:17:29.496067] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.424 [2024-07-14 03:17:29.496280] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.424 [2024-07-14 03:17:29.496463] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.424 [2024-07-14 03:17:29.496483] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.424 [2024-07-14 03:17:29.496497] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.424 [2024-07-14 03:17:29.498890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.424 [2024-07-14 03:17:29.508025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.424 [2024-07-14 03:17:29.508454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.508639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.508664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.424 [2024-07-14 03:17:29.508680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.424 [2024-07-14 03:17:29.508885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.424 [2024-07-14 03:17:29.509055] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.424 [2024-07-14 03:17:29.509077] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.424 [2024-07-14 03:17:29.509091] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.424 [2024-07-14 03:17:29.511139] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.424 [2024-07-14 03:17:29.520678] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.424 [2024-07-14 03:17:29.521031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.521227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.521255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.424 [2024-07-14 03:17:29.521273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.424 [2024-07-14 03:17:29.521487] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.424 [2024-07-14 03:17:29.521714] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.424 [2024-07-14 03:17:29.521735] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.424 [2024-07-14 03:17:29.521763] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.424 [2024-07-14 03:17:29.524046] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.424 [2024-07-14 03:17:29.533398] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.424 [2024-07-14 03:17:29.533763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.533956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.424 [2024-07-14 03:17:29.533984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.424 [2024-07-14 03:17:29.533999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.425 [2024-07-14 03:17:29.534163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.425 [2024-07-14 03:17:29.534358] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.425 [2024-07-14 03:17:29.534382] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.425 [2024-07-14 03:17:29.534398] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.425 [2024-07-14 03:17:29.536660] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.425 [2024-07-14 03:17:29.545893] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.425 [2024-07-14 03:17:29.546238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.425 [2024-07-14 03:17:29.546488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.425 [2024-07-14 03:17:29.546513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.425 [2024-07-14 03:17:29.546544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.425 [2024-07-14 03:17:29.546703] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.425 [2024-07-14 03:17:29.546892] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.425 [2024-07-14 03:17:29.546915] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.425 [2024-07-14 03:17:29.546929] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.425 [2024-07-14 03:17:29.549144] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.425 [2024-07-14 03:17:29.558529] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.425 [2024-07-14 03:17:29.558971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.425 [2024-07-14 03:17:29.559132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.425 [2024-07-14 03:17:29.559176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.425 [2024-07-14 03:17:29.559193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.425 [2024-07-14 03:17:29.559359] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.425 [2024-07-14 03:17:29.559511] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.425 [2024-07-14 03:17:29.559535] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.425 [2024-07-14 03:17:29.559551] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.425 [2024-07-14 03:17:29.561846] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.425 [2024-07-14 03:17:29.570995] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.425 [2024-07-14 03:17:29.571422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.425 [2024-07-14 03:17:29.571609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.425 [2024-07-14 03:17:29.571638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.425 [2024-07-14 03:17:29.571655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.425 [2024-07-14 03:17:29.571856] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.425 [2024-07-14 03:17:29.572005] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.425 [2024-07-14 03:17:29.572033] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.425 [2024-07-14 03:17:29.572048] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.425 [2024-07-14 03:17:29.574312] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.425 [2024-07-14 03:17:29.583561] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.425 [2024-07-14 03:17:29.583969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.425 [2024-07-14 03:17:29.584127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.425 [2024-07-14 03:17:29.584168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.425 [2024-07-14 03:17:29.584186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.425 [2024-07-14 03:17:29.584404] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.425 [2024-07-14 03:17:29.584592] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.425 [2024-07-14 03:17:29.584616] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.425 [2024-07-14 03:17:29.584632] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.425 [2024-07-14 03:17:29.587078] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.425 [2024-07-14 03:17:29.596091] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.425 [2024-07-14 03:17:29.596439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.425 [2024-07-14 03:17:29.596701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.425 [2024-07-14 03:17:29.596748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.425 [2024-07-14 03:17:29.596766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.425 [2024-07-14 03:17:29.596957] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.425 [2024-07-14 03:17:29.597127] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.425 [2024-07-14 03:17:29.597164] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.425 [2024-07-14 03:17:29.597180] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.425 [2024-07-14 03:17:29.599610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.425 [2024-07-14 03:17:29.608759] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.425 [2024-07-14 03:17:29.609079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.425 [2024-07-14 03:17:29.609294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.425 [2024-07-14 03:17:29.609340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.425 [2024-07-14 03:17:29.609358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.425 [2024-07-14 03:17:29.609523] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.425 [2024-07-14 03:17:29.609693] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.425 [2024-07-14 03:17:29.609717] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.425 [2024-07-14 03:17:29.609738] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.425 [2024-07-14 03:17:29.612017] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.425 [2024-07-14 03:17:29.621582] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.425 [2024-07-14 03:17:29.621921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.425 [2024-07-14 03:17:29.622083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.425 [2024-07-14 03:17:29.622109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.425 [2024-07-14 03:17:29.622124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.425 [2024-07-14 03:17:29.622276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.425 [2024-07-14 03:17:29.622446] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.425 [2024-07-14 03:17:29.622470] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.425 [2024-07-14 03:17:29.622486] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.425 [2024-07-14 03:17:29.624799] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.425 [2024-07-14 03:17:29.634147] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.425 [2024-07-14 03:17:29.634524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.425 [2024-07-14 03:17:29.634733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.425 [2024-07-14 03:17:29.634755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.425 [2024-07-14 03:17:29.634770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.425 [2024-07-14 03:17:29.634958] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.425 [2024-07-14 03:17:29.635175] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.425 [2024-07-14 03:17:29.635199] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.425 [2024-07-14 03:17:29.635215] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.425 [2024-07-14 03:17:29.637479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.425 [2024-07-14 03:17:29.646694] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.425 [2024-07-14 03:17:29.647044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.425 [2024-07-14 03:17:29.647265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.425 [2024-07-14 03:17:29.647293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.425 [2024-07-14 03:17:29.647310] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.425 [2024-07-14 03:17:29.647457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.425 [2024-07-14 03:17:29.647608] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.425 [2024-07-14 03:17:29.647632] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.425 [2024-07-14 03:17:29.647648] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.425 [2024-07-14 03:17:29.650106] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.425 [2024-07-14 03:17:29.659332] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.425 [2024-07-14 03:17:29.659652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.425 [2024-07-14 03:17:29.659830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.425 [2024-07-14 03:17:29.659860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.425 [2024-07-14 03:17:29.659900] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.426 [2024-07-14 03:17:29.660048] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.426 [2024-07-14 03:17:29.660164] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.426 [2024-07-14 03:17:29.660188] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.426 [2024-07-14 03:17:29.660203] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.426 [2024-07-14 03:17:29.662540] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.426 [2024-07-14 03:17:29.671935] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.426 [2024-07-14 03:17:29.672295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.426 [2024-07-14 03:17:29.672528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.426 [2024-07-14 03:17:29.672574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.426 [2024-07-14 03:17:29.672591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.426 [2024-07-14 03:17:29.672757] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.426 [2024-07-14 03:17:29.672959] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.426 [2024-07-14 03:17:29.672985] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.426 [2024-07-14 03:17:29.673001] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.426 [2024-07-14 03:17:29.675271] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.684 [2024-07-14 03:17:29.684540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.684 [2024-07-14 03:17:29.684953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.684 [2024-07-14 03:17:29.685174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.684 [2024-07-14 03:17:29.685222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.684 [2024-07-14 03:17:29.685240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.684 [2024-07-14 03:17:29.685424] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.684 [2024-07-14 03:17:29.685558] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.684 [2024-07-14 03:17:29.685581] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.684 [2024-07-14 03:17:29.685597] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.684 [2024-07-14 03:17:29.687907] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.684 [2024-07-14 03:17:29.697251] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.684 [2024-07-14 03:17:29.697590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.684 [2024-07-14 03:17:29.697795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.684 [2024-07-14 03:17:29.697824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.684 [2024-07-14 03:17:29.697841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.684 [2024-07-14 03:17:29.697982] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.684 [2024-07-14 03:17:29.698188] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.684 [2024-07-14 03:17:29.698212] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.684 [2024-07-14 03:17:29.698228] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.684 [2024-07-14 03:17:29.700635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.684 [2024-07-14 03:17:29.709959] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.684 [2024-07-14 03:17:29.710347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.684 [2024-07-14 03:17:29.710580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.684 [2024-07-14 03:17:29.710605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.684 [2024-07-14 03:17:29.710621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.684 [2024-07-14 03:17:29.710796] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.684 [2024-07-14 03:17:29.711010] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.684 [2024-07-14 03:17:29.711035] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.684 [2024-07-14 03:17:29.711050] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.684 [2024-07-14 03:17:29.713564] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.684 [2024-07-14 03:17:29.722624] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.684 [2024-07-14 03:17:29.722950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.684 [2024-07-14 03:17:29.723127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.684 [2024-07-14 03:17:29.723155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.684 [2024-07-14 03:17:29.723172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.684 [2024-07-14 03:17:29.723338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.684 [2024-07-14 03:17:29.723489] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.684 [2024-07-14 03:17:29.723513] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.684 [2024-07-14 03:17:29.723529] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.684 [2024-07-14 03:17:29.725687] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.684 [2024-07-14 03:17:29.735051] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.685 [2024-07-14 03:17:29.735392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.735695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.735720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.685 [2024-07-14 03:17:29.735750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.685 [2024-07-14 03:17:29.735919] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.685 [2024-07-14 03:17:29.736055] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.685 [2024-07-14 03:17:29.736079] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.685 [2024-07-14 03:17:29.736094] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.685 [2024-07-14 03:17:29.738410] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.685 [2024-07-14 03:17:29.747711] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.685 [2024-07-14 03:17:29.748093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.748296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.748321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.685 [2024-07-14 03:17:29.748337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.685 [2024-07-14 03:17:29.748553] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.685 [2024-07-14 03:17:29.748740] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.685 [2024-07-14 03:17:29.748764] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.685 [2024-07-14 03:17:29.748779] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.685 [2024-07-14 03:17:29.751069] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.685 [2024-07-14 03:17:29.760256] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.685 [2024-07-14 03:17:29.760618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.760830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.760858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.685 [2024-07-14 03:17:29.760886] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.685 [2024-07-14 03:17:29.761052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.685 [2024-07-14 03:17:29.761242] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.685 [2024-07-14 03:17:29.761266] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.685 [2024-07-14 03:17:29.761282] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.685 [2024-07-14 03:17:29.763572] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.685 [2024-07-14 03:17:29.772650] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.685 [2024-07-14 03:17:29.773098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.773334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.773378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.685 [2024-07-14 03:17:29.773395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.685 [2024-07-14 03:17:29.773565] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.685 [2024-07-14 03:17:29.773780] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.685 [2024-07-14 03:17:29.773804] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.685 [2024-07-14 03:17:29.773820] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.685 [2024-07-14 03:17:29.776270] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.685 [2024-07-14 03:17:29.785246] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.685 [2024-07-14 03:17:29.785684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.785905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.785935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.685 [2024-07-14 03:17:29.785952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.685 [2024-07-14 03:17:29.786153] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.685 [2024-07-14 03:17:29.786341] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.685 [2024-07-14 03:17:29.786365] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.685 [2024-07-14 03:17:29.786381] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.685 [2024-07-14 03:17:29.788538] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.685 [2024-07-14 03:17:29.797999] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.685 [2024-07-14 03:17:29.798384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.798664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.798710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.685 [2024-07-14 03:17:29.798728] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.685 [2024-07-14 03:17:29.798892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.685 [2024-07-14 03:17:29.799098] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.685 [2024-07-14 03:17:29.799122] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.685 [2024-07-14 03:17:29.799138] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.685 [2024-07-14 03:17:29.801563] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.685 [2024-07-14 03:17:29.810634] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.685 [2024-07-14 03:17:29.811039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.811238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.811267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.685 [2024-07-14 03:17:29.811293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.685 [2024-07-14 03:17:29.811423] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.685 [2024-07-14 03:17:29.811593] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.685 [2024-07-14 03:17:29.811617] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.685 [2024-07-14 03:17:29.811632] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.685 [2024-07-14 03:17:29.813959] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.685 [2024-07-14 03:17:29.823220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.685 [2024-07-14 03:17:29.823640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.823874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.823907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.685 [2024-07-14 03:17:29.823922] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.685 [2024-07-14 03:17:29.824155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.685 [2024-07-14 03:17:29.824325] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.685 [2024-07-14 03:17:29.824349] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.685 [2024-07-14 03:17:29.824364] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.685 [2024-07-14 03:17:29.826503] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.685 [2024-07-14 03:17:29.835754] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.685 [2024-07-14 03:17:29.836152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.836499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.836562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.685 [2024-07-14 03:17:29.836580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.685 [2024-07-14 03:17:29.836763] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.685 [2024-07-14 03:17:29.836962] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.685 [2024-07-14 03:17:29.836987] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.685 [2024-07-14 03:17:29.837002] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.685 [2024-07-14 03:17:29.839342] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.685 [2024-07-14 03:17:29.848185] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.685 [2024-07-14 03:17:29.848550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.848786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.685 [2024-07-14 03:17:29.848833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.685 [2024-07-14 03:17:29.848851] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.685 [2024-07-14 03:17:29.849043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.685 [2024-07-14 03:17:29.849194] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.686 [2024-07-14 03:17:29.849218] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.686 [2024-07-14 03:17:29.849234] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.686 [2024-07-14 03:17:29.851588] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.686 [2024-07-14 03:17:29.860825] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.686 [2024-07-14 03:17:29.861195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.686 [2024-07-14 03:17:29.861481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.686 [2024-07-14 03:17:29.861527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.686 [2024-07-14 03:17:29.861545] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.686 [2024-07-14 03:17:29.861746] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.686 [2024-07-14 03:17:29.861891] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.686 [2024-07-14 03:17:29.861916] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.686 [2024-07-14 03:17:29.861931] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.686 [2024-07-14 03:17:29.864227] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.686 [2024-07-14 03:17:29.873274] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.686 [2024-07-14 03:17:29.873821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.686 [2024-07-14 03:17:29.874062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.686 [2024-07-14 03:17:29.874091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.686 [2024-07-14 03:17:29.874109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.686 [2024-07-14 03:17:29.874256] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.686 [2024-07-14 03:17:29.874426] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.686 [2024-07-14 03:17:29.874450] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.686 [2024-07-14 03:17:29.874465] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.686 [2024-07-14 03:17:29.876695] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.686 [2024-07-14 03:17:29.885732] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.686 [2024-07-14 03:17:29.886216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.686 [2024-07-14 03:17:29.886435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.686 [2024-07-14 03:17:29.886463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.686 [2024-07-14 03:17:29.886481] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.686 [2024-07-14 03:17:29.886646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.686 [2024-07-14 03:17:29.886803] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.686 [2024-07-14 03:17:29.886827] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.686 [2024-07-14 03:17:29.886842] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.686 [2024-07-14 03:17:29.889384] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.686 [2024-07-14 03:17:29.898328] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.686 [2024-07-14 03:17:29.898715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.686 [2024-07-14 03:17:29.898943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.686 [2024-07-14 03:17:29.898973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.686 [2024-07-14 03:17:29.898991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.686 [2024-07-14 03:17:29.899156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.686 [2024-07-14 03:17:29.899327] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.686 [2024-07-14 03:17:29.899351] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.686 [2024-07-14 03:17:29.899367] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.686 [2024-07-14 03:17:29.901701] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.686 [2024-07-14 03:17:29.910806] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.686 [2024-07-14 03:17:29.911258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.686 [2024-07-14 03:17:29.911680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.686 [2024-07-14 03:17:29.911735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.686 [2024-07-14 03:17:29.911753] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.686 [2024-07-14 03:17:29.911911] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.686 [2024-07-14 03:17:29.912082] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.686 [2024-07-14 03:17:29.912106] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.686 [2024-07-14 03:17:29.912122] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.686 [2024-07-14 03:17:29.914207] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.686 [2024-07-14 03:17:29.923350] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.686 [2024-07-14 03:17:29.923710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.686 [2024-07-14 03:17:29.923942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.686 [2024-07-14 03:17:29.923972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.686 [2024-07-14 03:17:29.923989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.686 [2024-07-14 03:17:29.924173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.686 [2024-07-14 03:17:29.924360] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.686 [2024-07-14 03:17:29.924389] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.686 [2024-07-14 03:17:29.924406] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.686 [2024-07-14 03:17:29.926879] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.686 [2024-07-14 03:17:29.936034] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.686 [2024-07-14 03:17:29.936480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.947 [2024-07-14 03:17:29.936676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.947 [2024-07-14 03:17:29.936717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.947 [2024-07-14 03:17:29.936736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.947 [2024-07-14 03:17:29.936931] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.947 [2024-07-14 03:17:29.937102] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.947 [2024-07-14 03:17:29.937126] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.947 [2024-07-14 03:17:29.937142] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.947 [2024-07-14 03:17:29.939462] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:34.947 [2024-07-14 03:17:29.948633] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.947 [2024-07-14 03:17:29.949005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:29.949252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:29.949301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.947 [2024-07-14 03:17:29.949319] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.947 [2024-07-14 03:17:29.949484] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.947 [2024-07-14 03:17:29.949654] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.947 [2024-07-14 03:17:29.949677] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.947 [2024-07-14 03:17:29.949693] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.947 [2024-07-14 03:17:29.952144] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.947 [2024-07-14 03:17:29.961304] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.947 [2024-07-14 03:17:29.961673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:29.961844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:29.961883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.947 [2024-07-14 03:17:29.961910] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.947 [2024-07-14 03:17:29.962076] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.947 [2024-07-14 03:17:29.962246] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.947 [2024-07-14 03:17:29.962270] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.947 [2024-07-14 03:17:29.962295] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.947 [2024-07-14 03:17:29.964524] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.947 [2024-07-14 03:17:29.973717] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.947 [2024-07-14 03:17:29.974089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:29.974319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:29.974345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.947 [2024-07-14 03:17:29.974360] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.947 [2024-07-14 03:17:29.974543] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.947 [2024-07-14 03:17:29.974694] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.947 [2024-07-14 03:17:29.974718] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.947 [2024-07-14 03:17:29.974734] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.947 [2024-07-14 03:17:29.977186] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.947 [2024-07-14 03:17:29.986196] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.947 [2024-07-14 03:17:29.986590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:29.986802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:29.986827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.947 [2024-07-14 03:17:29.986842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.947 [2024-07-14 03:17:29.987068] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.947 [2024-07-14 03:17:29.987203] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.947 [2024-07-14 03:17:29.987227] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.947 [2024-07-14 03:17:29.987243] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.947 [2024-07-14 03:17:29.989559] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.947 [2024-07-14 03:17:29.998750] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.947 [2024-07-14 03:17:29.999196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:29.999518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:29.999566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.947 [2024-07-14 03:17:29.999583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.947 [2024-07-14 03:17:29.999731] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.947 [2024-07-14 03:17:29.999895] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.947 [2024-07-14 03:17:29.999920] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.947 [2024-07-14 03:17:29.999936] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.947 [2024-07-14 03:17:30.002316] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.947 [2024-07-14 03:17:30.011463] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.947 [2024-07-14 03:17:30.011843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:30.012106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:30.012140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.947 [2024-07-14 03:17:30.012162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.947 [2024-07-14 03:17:30.012351] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.947 [2024-07-14 03:17:30.012604] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.947 [2024-07-14 03:17:30.012631] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.947 [2024-07-14 03:17:30.012651] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.947 [2024-07-14 03:17:30.015165] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.947 [2024-07-14 03:17:30.024044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.947 [2024-07-14 03:17:30.024603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:30.024860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:30.024899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.947 [2024-07-14 03:17:30.024919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.947 [2024-07-14 03:17:30.025085] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.947 [2024-07-14 03:17:30.025220] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.947 [2024-07-14 03:17:30.025244] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.947 [2024-07-14 03:17:30.025260] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.947 [2024-07-14 03:17:30.027534] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.947 [2024-07-14 03:17:30.036598] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.947 [2024-07-14 03:17:30.037023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:30.037223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:30.037251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.947 [2024-07-14 03:17:30.037269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.947 [2024-07-14 03:17:30.037434] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.947 [2024-07-14 03:17:30.037657] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.947 [2024-07-14 03:17:30.037682] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.947 [2024-07-14 03:17:30.037698] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.947 [2024-07-14 03:17:30.040030] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.947 [2024-07-14 03:17:30.048966] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.947 [2024-07-14 03:17:30.049337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:30.049698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:30.049747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.947 [2024-07-14 03:17:30.049765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.947 [2024-07-14 03:17:30.049886] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.947 [2024-07-14 03:17:30.050103] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.947 [2024-07-14 03:17:30.050127] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.947 [2024-07-14 03:17:30.050142] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.947 [2024-07-14 03:17:30.052443] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.947 [2024-07-14 03:17:30.061612] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.947 [2024-07-14 03:17:30.061978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:30.062178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:30.062206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.947 [2024-07-14 03:17:30.062224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.947 [2024-07-14 03:17:30.062389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.947 [2024-07-14 03:17:30.062487] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.947 [2024-07-14 03:17:30.062510] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.947 [2024-07-14 03:17:30.062526] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.947 [2024-07-14 03:17:30.064703] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.947 [2024-07-14 03:17:30.073980] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.947 [2024-07-14 03:17:30.074465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:30.074729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:30.074757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.947 [2024-07-14 03:17:30.074774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.947 [2024-07-14 03:17:30.074969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.947 [2024-07-14 03:17:30.075158] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.947 [2024-07-14 03:17:30.075182] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.947 [2024-07-14 03:17:30.075198] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.947 [2024-07-14 03:17:30.077521] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.947 [2024-07-14 03:17:30.086384] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.947 [2024-07-14 03:17:30.086950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.947 [2024-07-14 03:17:30.087172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.948 [2024-07-14 03:17:30.087197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.948 [2024-07-14 03:17:30.087213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.948 [2024-07-14 03:17:30.087409] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.948 [2024-07-14 03:17:30.087597] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.948 [2024-07-14 03:17:30.087622] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.948 [2024-07-14 03:17:30.087638] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.948 [2024-07-14 03:17:30.089950] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.948 [2024-07-14 03:17:30.098937] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.948 [2024-07-14 03:17:30.099455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.948 [2024-07-14 03:17:30.099833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.948 [2024-07-14 03:17:30.099961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.948 [2024-07-14 03:17:30.099981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.948 [2024-07-14 03:17:30.100183] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.948 [2024-07-14 03:17:30.100388] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.948 [2024-07-14 03:17:30.100412] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.948 [2024-07-14 03:17:30.100428] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.948 [2024-07-14 03:17:30.102624] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.948 [2024-07-14 03:17:30.111638] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.948 [2024-07-14 03:17:30.111951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.948 [2024-07-14 03:17:30.112165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.948 [2024-07-14 03:17:30.112191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.948 [2024-07-14 03:17:30.112206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.948 [2024-07-14 03:17:30.112409] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.948 [2024-07-14 03:17:30.112597] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.948 [2024-07-14 03:17:30.112621] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.948 [2024-07-14 03:17:30.112637] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.948 [2024-07-14 03:17:30.114933] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.948 [2024-07-14 03:17:30.124471] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.948 [2024-07-14 03:17:30.124930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.948 [2024-07-14 03:17:30.125112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.948 [2024-07-14 03:17:30.125146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.948 [2024-07-14 03:17:30.125164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.948 [2024-07-14 03:17:30.125329] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.948 [2024-07-14 03:17:30.125534] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.948 [2024-07-14 03:17:30.125558] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.948 [2024-07-14 03:17:30.125574] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.948 [2024-07-14 03:17:30.127883] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.948 [2024-07-14 03:17:30.136877] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.948 [2024-07-14 03:17:30.137311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.948 [2024-07-14 03:17:30.137648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.948 [2024-07-14 03:17:30.137701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.948 [2024-07-14 03:17:30.137719] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.948 [2024-07-14 03:17:30.137952] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.948 [2024-07-14 03:17:30.138140] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.948 [2024-07-14 03:17:30.138164] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.948 [2024-07-14 03:17:30.138180] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.948 [2024-07-14 03:17:30.140669] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.948 [2024-07-14 03:17:30.149403] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.948 [2024-07-14 03:17:30.149758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.948 [2024-07-14 03:17:30.149938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.948 [2024-07-14 03:17:30.149968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.948 [2024-07-14 03:17:30.149985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.948 [2024-07-14 03:17:30.150151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.948 [2024-07-14 03:17:30.150303] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.948 [2024-07-14 03:17:30.150327] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.948 [2024-07-14 03:17:30.150343] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.948 [2024-07-14 03:17:30.152699] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.948 [2024-07-14 03:17:30.161997] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.948 [2024-07-14 03:17:30.162382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.948 [2024-07-14 03:17:30.162737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.948 [2024-07-14 03:17:30.162786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.948 [2024-07-14 03:17:30.162809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.948 [2024-07-14 03:17:30.163004] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.948 [2024-07-14 03:17:30.163156] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.948 [2024-07-14 03:17:30.163181] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.948 [2024-07-14 03:17:30.163196] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.948 [2024-07-14 03:17:30.165337] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.948 [2024-07-14 03:17:30.174568] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:34.948 [2024-07-14 03:17:30.175010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.948 [2024-07-14 03:17:30.175216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:34.948 [2024-07-14 03:17:30.175244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:34.948 [2024-07-14 03:17:30.175262] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:34.948 [2024-07-14 03:17:30.175426] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:34.948 [2024-07-14 03:17:30.175632] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:34.948 [2024-07-14 03:17:30.175656] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:34.948 [2024-07-14 03:17:30.175672] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:34.948 [2024-07-14 03:17:30.177983] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:34.948 [2024-07-14 03:17:30.187040] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:34.948 [2024-07-14 03:17:30.187383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.948 [2024-07-14 03:17:30.187690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:34.948 [2024-07-14 03:17:30.187741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:34.948 [2024-07-14 03:17:30.187758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:34.948 [2024-07-14 03:17:30.187918] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:34.948 [2024-07-14 03:17:30.188070] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:34.948 [2024-07-14 03:17:30.188094] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:34.948 [2024-07-14 03:17:30.188110] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:34.948 [2024-07-14 03:17:30.190464] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.208 [2024-07-14 03:17:30.199941] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.208 [2024-07-14 03:17:30.200351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.208 [2024-07-14 03:17:30.200540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.208 [2024-07-14 03:17:30.200568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.208 [2024-07-14 03:17:30.200586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.208 [2024-07-14 03:17:30.200777] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.208 [2024-07-14 03:17:30.200959] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.208 [2024-07-14 03:17:30.200984] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.208 [2024-07-14 03:17:30.201001] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.208 [2024-07-14 03:17:30.203390] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.208 [2024-07-14 03:17:30.212314] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.208 [2024-07-14 03:17:30.212776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.208 [2024-07-14 03:17:30.212999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.208 [2024-07-14 03:17:30.213028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.208 [2024-07-14 03:17:30.213046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.208 [2024-07-14 03:17:30.213265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.208 [2024-07-14 03:17:30.213488] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.208 [2024-07-14 03:17:30.213513] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.208 [2024-07-14 03:17:30.213529] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.208 [2024-07-14 03:17:30.215743] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.208 [2024-07-14 03:17:30.224924] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.208 [2024-07-14 03:17:30.225247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.208 [2024-07-14 03:17:30.225459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.208 [2024-07-14 03:17:30.225487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.208 [2024-07-14 03:17:30.225504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.208 [2024-07-14 03:17:30.225615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.208 [2024-07-14 03:17:30.225785] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.208 [2024-07-14 03:17:30.225809] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.208 [2024-07-14 03:17:30.225824] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.208 [2024-07-14 03:17:30.228279] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.208 [2024-07-14 03:17:30.237459] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.208 [2024-07-14 03:17:30.237945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.208 [2024-07-14 03:17:30.238214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.208 [2024-07-14 03:17:30.238265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.208 [2024-07-14 03:17:30.238282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.208 [2024-07-14 03:17:30.238429] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.208 [2024-07-14 03:17:30.238604] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.208 [2024-07-14 03:17:30.238629] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.208 [2024-07-14 03:17:30.238644] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.208 [2024-07-14 03:17:30.240719] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.208 [2024-07-14 03:17:30.250007] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.208 [2024-07-14 03:17:30.250409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.208 [2024-07-14 03:17:30.250763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.208 [2024-07-14 03:17:30.250813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.208 [2024-07-14 03:17:30.250831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.208 [2024-07-14 03:17:30.251042] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.208 [2024-07-14 03:17:30.251212] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.208 [2024-07-14 03:17:30.251236] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.208 [2024-07-14 03:17:30.251252] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.208 [2024-07-14 03:17:30.253555] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.208 [2024-07-14 03:17:30.262738] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.208 [2024-07-14 03:17:30.263080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.208 [2024-07-14 03:17:30.263281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.208 [2024-07-14 03:17:30.263309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.208 [2024-07-14 03:17:30.263326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.208 [2024-07-14 03:17:30.263455] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.208 [2024-07-14 03:17:30.263660] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.208 [2024-07-14 03:17:30.263684] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.208 [2024-07-14 03:17:30.263700] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.208 [2024-07-14 03:17:30.265962] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.208 [2024-07-14 03:17:30.275356] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.208 [2024-07-14 03:17:30.275927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.208 [2024-07-14 03:17:30.276134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.208 [2024-07-14 03:17:30.276162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.208 [2024-07-14 03:17:30.276179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.208 [2024-07-14 03:17:30.276344] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.208 [2024-07-14 03:17:30.276479] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.208 [2024-07-14 03:17:30.276508] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.208 [2024-07-14 03:17:30.276524] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.208 [2024-07-14 03:17:30.278808] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.208 [2024-07-14 03:17:30.287938] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.208 [2024-07-14 03:17:30.288361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.208 [2024-07-14 03:17:30.288584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.208 [2024-07-14 03:17:30.288612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.208 [2024-07-14 03:17:30.288629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.208 [2024-07-14 03:17:30.288758] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.208 [2024-07-14 03:17:30.288960] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.208 [2024-07-14 03:17:30.288985] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.208 [2024-07-14 03:17:30.289001] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.208 [2024-07-14 03:17:30.291230] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.208 [2024-07-14 03:17:30.300445] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.208 [2024-07-14 03:17:30.300831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.208 [2024-07-14 03:17:30.301048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.208 [2024-07-14 03:17:30.301077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.208 [2024-07-14 03:17:30.301095] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.208 [2024-07-14 03:17:30.301260] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.208 [2024-07-14 03:17:30.301429] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.208 [2024-07-14 03:17:30.301454] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.208 [2024-07-14 03:17:30.301469] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.208 [2024-07-14 03:17:30.303790] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.209 [2024-07-14 03:17:30.312968] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.209 [2024-07-14 03:17:30.313350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.313627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.313653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.209 [2024-07-14 03:17:30.313668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.209 [2024-07-14 03:17:30.313888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.209 [2024-07-14 03:17:30.314059] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.209 [2024-07-14 03:17:30.314082] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.209 [2024-07-14 03:17:30.314104] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.209 [2024-07-14 03:17:30.316406] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.209 [2024-07-14 03:17:30.325472] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.209 [2024-07-14 03:17:30.325818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.326005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.326035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.209 [2024-07-14 03:17:30.326052] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.209 [2024-07-14 03:17:30.326199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.209 [2024-07-14 03:17:30.326405] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.209 [2024-07-14 03:17:30.326428] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.209 [2024-07-14 03:17:30.326444] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.209 [2024-07-14 03:17:30.328880] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.209 [2024-07-14 03:17:30.338008] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.209 [2024-07-14 03:17:30.338390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.338694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.338750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.209 [2024-07-14 03:17:30.338767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.209 [2024-07-14 03:17:30.338896] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.209 [2024-07-14 03:17:30.338994] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.209 [2024-07-14 03:17:30.339018] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.209 [2024-07-14 03:17:30.339034] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.209 [2024-07-14 03:17:30.341335] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.209 [2024-07-14 03:17:30.350605] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.209 [2024-07-14 03:17:30.350948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.351151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.351177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.209 [2024-07-14 03:17:30.351192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.209 [2024-07-14 03:17:30.351370] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.209 [2024-07-14 03:17:30.351523] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.209 [2024-07-14 03:17:30.351546] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.209 [2024-07-14 03:17:30.351562] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.209 [2024-07-14 03:17:30.353876] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.209 [2024-07-14 03:17:30.363076] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.209 [2024-07-14 03:17:30.363422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.363664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.363693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.209 [2024-07-14 03:17:30.363710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.209 [2024-07-14 03:17:30.363905] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.209 [2024-07-14 03:17:30.364058] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.209 [2024-07-14 03:17:30.364082] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.209 [2024-07-14 03:17:30.364098] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.209 [2024-07-14 03:17:30.366435] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.209 [2024-07-14 03:17:30.375672] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.209 [2024-07-14 03:17:30.376015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.376336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.376397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.209 [2024-07-14 03:17:30.376414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.209 [2024-07-14 03:17:30.376578] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.209 [2024-07-14 03:17:30.376730] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.209 [2024-07-14 03:17:30.376754] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.209 [2024-07-14 03:17:30.376769] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.209 [2024-07-14 03:17:30.379116] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.209 [2024-07-14 03:17:30.388084] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.209 [2024-07-14 03:17:30.388420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.388637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.388664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.209 [2024-07-14 03:17:30.388680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.209 [2024-07-14 03:17:30.388863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.209 [2024-07-14 03:17:30.389092] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.209 [2024-07-14 03:17:30.389116] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.209 [2024-07-14 03:17:30.389132] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.209 [2024-07-14 03:17:30.391614] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.209 [2024-07-14 03:17:30.400750] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.209 [2024-07-14 03:17:30.401152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.401353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.401379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.209 [2024-07-14 03:17:30.401395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.209 [2024-07-14 03:17:30.401537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.209 [2024-07-14 03:17:30.401672] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.209 [2024-07-14 03:17:30.401696] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.209 [2024-07-14 03:17:30.401711] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.209 [2024-07-14 03:17:30.403831] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.209 [2024-07-14 03:17:30.413228] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.209 [2024-07-14 03:17:30.413584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.413787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.413815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.209 [2024-07-14 03:17:30.413833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.209 [2024-07-14 03:17:30.414043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.209 [2024-07-14 03:17:30.414195] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.209 [2024-07-14 03:17:30.414219] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.209 [2024-07-14 03:17:30.414235] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.209 [2024-07-14 03:17:30.416572] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.209 [2024-07-14 03:17:30.425817] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.209 [2024-07-14 03:17:30.426169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.426361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.426389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.209 [2024-07-14 03:17:30.426407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.209 [2024-07-14 03:17:30.426590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.209 [2024-07-14 03:17:30.426778] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.209 [2024-07-14 03:17:30.426802] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.209 [2024-07-14 03:17:30.426818] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.209 [2024-07-14 03:17:30.429273] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.209 [2024-07-14 03:17:30.438331] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.209 [2024-07-14 03:17:30.438716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.438946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.438976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.209 [2024-07-14 03:17:30.438993] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.209 [2024-07-14 03:17:30.439123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.209 [2024-07-14 03:17:30.439346] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.209 [2024-07-14 03:17:30.439370] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.209 [2024-07-14 03:17:30.439386] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.209 [2024-07-14 03:17:30.441955] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.209 [2024-07-14 03:17:30.451049] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.209 [2024-07-14 03:17:30.451505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.451684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.209 [2024-07-14 03:17:30.451711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.209 [2024-07-14 03:17:30.451728] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.209 [2024-07-14 03:17:30.451903] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.209 [2024-07-14 03:17:30.452056] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.209 [2024-07-14 03:17:30.452079] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.209 [2024-07-14 03:17:30.452095] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.209 [2024-07-14 03:17:30.454271] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.468 [2024-07-14 03:17:30.463587] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.469 [2024-07-14 03:17:30.463999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.469 [2024-07-14 03:17:30.464178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.469 [2024-07-14 03:17:30.464213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.469 [2024-07-14 03:17:30.464233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.469 [2024-07-14 03:17:30.464417] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.469 [2024-07-14 03:17:30.464641] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.469 [2024-07-14 03:17:30.464665] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.469 [2024-07-14 03:17:30.464681] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.469 [2024-07-14 03:17:30.466956] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.469 [2024-07-14 03:17:30.476191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.469 [2024-07-14 03:17:30.476715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.469 [2024-07-14 03:17:30.476972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.469 [2024-07-14 03:17:30.477007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.469 [2024-07-14 03:17:30.477026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.469 [2024-07-14 03:17:30.477246] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.469 [2024-07-14 03:17:30.477434] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.469 [2024-07-14 03:17:30.477458] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.469 [2024-07-14 03:17:30.477474] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.469 [2024-07-14 03:17:30.479828] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.469 [2024-07-14 03:17:30.488822] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.469 [2024-07-14 03:17:30.489212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.469 [2024-07-14 03:17:30.489452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.469 [2024-07-14 03:17:30.489493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.469 [2024-07-14 03:17:30.489511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.469 [2024-07-14 03:17:30.489641] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.469 [2024-07-14 03:17:30.489829] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.469 [2024-07-14 03:17:30.489853] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.469 [2024-07-14 03:17:30.489879] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.469 [2024-07-14 03:17:30.492227] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.469 [2024-07-14 03:17:30.501328] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.469 [2024-07-14 03:17:30.501731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.469 [2024-07-14 03:17:30.501989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.469 [2024-07-14 03:17:30.502046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.469 [2024-07-14 03:17:30.502064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.469 [2024-07-14 03:17:30.502212] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.469 [2024-07-14 03:17:30.502363] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.469 [2024-07-14 03:17:30.502387] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.469 [2024-07-14 03:17:30.502402] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.469 [2024-07-14 03:17:30.504965] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.469 [2024-07-14 03:17:30.513939] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.469 [2024-07-14 03:17:30.514308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.469 [2024-07-14 03:17:30.514510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.469 [2024-07-14 03:17:30.514539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.469 [2024-07-14 03:17:30.514561] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.469 [2024-07-14 03:17:30.514728] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.469 [2024-07-14 03:17:30.514926] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.469 [2024-07-14 03:17:30.514950] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.469 [2024-07-14 03:17:30.514966] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.469 [2024-07-14 03:17:30.517504] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.469 [2024-07-14 03:17:30.526412] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.469 [2024-07-14 03:17:30.526823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.469 [2024-07-14 03:17:30.526998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.469 [2024-07-14 03:17:30.527024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.469 [2024-07-14 03:17:30.527040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.469 [2024-07-14 03:17:30.527190] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.469 [2024-07-14 03:17:30.527396] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.469 [2024-07-14 03:17:30.527419] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.469 [2024-07-14 03:17:30.527435] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.469 [2024-07-14 03:17:30.529911] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.469 [2024-07-14 03:17:30.538917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.469 [2024-07-14 03:17:30.539324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.469 [2024-07-14 03:17:30.539496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.469 [2024-07-14 03:17:30.539526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.469 [2024-07-14 03:17:30.539544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.469 [2024-07-14 03:17:30.539746] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.469 [2024-07-14 03:17:30.539934] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.469 [2024-07-14 03:17:30.539960] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.469 [2024-07-14 03:17:30.539976] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.469 [2024-07-14 03:17:30.542384] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.469 [2024-07-14 03:17:30.551396] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.469 [2024-07-14 03:17:30.551760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.469 [2024-07-14 03:17:30.551934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.469 [2024-07-14 03:17:30.551961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.469 [2024-07-14 03:17:30.551976] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.469 [2024-07-14 03:17:30.552179] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.469 [2024-07-14 03:17:30.552375] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.469 [2024-07-14 03:17:30.552399] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.469 [2024-07-14 03:17:30.552415] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.469 [2024-07-14 03:17:30.554626] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.469 [2024-07-14 03:17:30.563927] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.469 [2024-07-14 03:17:30.564525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.469 [2024-07-14 03:17:30.564791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.469 [2024-07-14 03:17:30.564816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.469 [2024-07-14 03:17:30.564832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.469 [2024-07-14 03:17:30.565032] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.469 [2024-07-14 03:17:30.565256] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.469 [2024-07-14 03:17:30.565281] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.469 [2024-07-14 03:17:30.565297] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.469 [2024-07-14 03:17:30.567707] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.469 [2024-07-14 03:17:30.576398] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.469 [2024-07-14 03:17:30.576782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.469 [2024-07-14 03:17:30.576994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.469 [2024-07-14 03:17:30.577021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.469 [2024-07-14 03:17:30.577037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.469 [2024-07-14 03:17:30.577210] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.469 [2024-07-14 03:17:30.577434] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.469 [2024-07-14 03:17:30.577458] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.469 [2024-07-14 03:17:30.577475] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.469 [2024-07-14 03:17:30.579949] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.469 [2024-07-14 03:17:30.589184] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.469 [2024-07-14 03:17:30.589605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.469 [2024-07-14 03:17:30.589816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.469 [2024-07-14 03:17:30.589844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.469 [2024-07-14 03:17:30.589861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.469 [2024-07-14 03:17:30.590055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.469 [2024-07-14 03:17:30.590251] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.469 [2024-07-14 03:17:30.590275] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.469 [2024-07-14 03:17:30.590290] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.469 [2024-07-14 03:17:30.592626] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.469 [2024-07-14 03:17:30.601611] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.469 [2024-07-14 03:17:30.601986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.469 [2024-07-14 03:17:30.602376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.469 [2024-07-14 03:17:30.602433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.469 [2024-07-14 03:17:30.602450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.469 [2024-07-14 03:17:30.602616] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.469 [2024-07-14 03:17:30.602731] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.469 [2024-07-14 03:17:30.602755] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.469 [2024-07-14 03:17:30.602770] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.469 [2024-07-14 03:17:30.604924] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.469 [2024-07-14 03:17:30.614279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.469 [2024-07-14 03:17:30.614656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.469 [2024-07-14 03:17:30.614858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.469 [2024-07-14 03:17:30.614893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.469 [2024-07-14 03:17:30.614920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.469 [2024-07-14 03:17:30.615121] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.469 [2024-07-14 03:17:30.615290] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.469 [2024-07-14 03:17:30.615314] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.469 [2024-07-14 03:17:30.615330] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.469 [2024-07-14 03:17:30.617593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.469 [2024-07-14 03:17:30.627138] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.469 [2024-07-14 03:17:30.627493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.469 [2024-07-14 03:17:30.627721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.469 [2024-07-14 03:17:30.627749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.469 [2024-07-14 03:17:30.627766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.470 [2024-07-14 03:17:30.627973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.470 [2024-07-14 03:17:30.628111] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.470 [2024-07-14 03:17:30.628140] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.470 [2024-07-14 03:17:30.628170] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.470 [2024-07-14 03:17:30.630558] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.470 [2024-07-14 03:17:30.639713] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.470 [2024-07-14 03:17:30.640104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.470 [2024-07-14 03:17:30.640341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.470 [2024-07-14 03:17:30.640388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.470 [2024-07-14 03:17:30.640406] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.470 [2024-07-14 03:17:30.640572] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.470 [2024-07-14 03:17:30.640741] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.470 [2024-07-14 03:17:30.640765] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.470 [2024-07-14 03:17:30.640780] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.470 [2024-07-14 03:17:30.643111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.470 [2024-07-14 03:17:30.652481] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.470 [2024-07-14 03:17:30.652807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.470 [2024-07-14 03:17:30.652990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.470 [2024-07-14 03:17:30.653017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.470 [2024-07-14 03:17:30.653034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.470 [2024-07-14 03:17:30.653215] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.470 [2024-07-14 03:17:30.653423] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.470 [2024-07-14 03:17:30.653447] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.470 [2024-07-14 03:17:30.653463] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.470 [2024-07-14 03:17:30.655843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.470 [2024-07-14 03:17:30.665087] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.470 [2024-07-14 03:17:30.665448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.470 [2024-07-14 03:17:30.665637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.470 [2024-07-14 03:17:30.665663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.470 [2024-07-14 03:17:30.665678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.470 [2024-07-14 03:17:30.665838] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.470 [2024-07-14 03:17:30.666018] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.470 [2024-07-14 03:17:30.666043] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.470 [2024-07-14 03:17:30.666064] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.470 [2024-07-14 03:17:30.668331] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.470 [2024-07-14 03:17:30.677709] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.470 [2024-07-14 03:17:30.678074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.470 [2024-07-14 03:17:30.678356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.470 [2024-07-14 03:17:30.678382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.470 [2024-07-14 03:17:30.678413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.470 [2024-07-14 03:17:30.678588] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.470 [2024-07-14 03:17:30.678775] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.470 [2024-07-14 03:17:30.678799] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.470 [2024-07-14 03:17:30.678815] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.470 [2024-07-14 03:17:30.681329] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.470 [2024-07-14 03:17:30.690367] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.470 [2024-07-14 03:17:30.690825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.470 [2024-07-14 03:17:30.691040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.470 [2024-07-14 03:17:30.691066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.470 [2024-07-14 03:17:30.691082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.470 [2024-07-14 03:17:30.691239] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.470 [2024-07-14 03:17:30.691444] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.470 [2024-07-14 03:17:30.691468] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.470 [2024-07-14 03:17:30.691483] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.470 [2024-07-14 03:17:30.693635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.470 [2024-07-14 03:17:30.703015] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.470 [2024-07-14 03:17:30.703372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.470 [2024-07-14 03:17:30.703746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.470 [2024-07-14 03:17:30.703807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.470 [2024-07-14 03:17:30.703824] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.470 [2024-07-14 03:17:30.703998] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.470 [2024-07-14 03:17:30.704136] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.470 [2024-07-14 03:17:30.704171] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.470 [2024-07-14 03:17:30.704183] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.470 [2024-07-14 03:17:30.706490] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.470 [2024-07-14 03:17:30.715645] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.470 [2024-07-14 03:17:30.716047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.470 [2024-07-14 03:17:30.716244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.470 [2024-07-14 03:17:30.716268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.470 [2024-07-14 03:17:30.716284] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.470 [2024-07-14 03:17:30.716496] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.470 [2024-07-14 03:17:30.716684] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.470 [2024-07-14 03:17:30.716708] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.470 [2024-07-14 03:17:30.716724] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.470 [2024-07-14 03:17:30.719201] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.731 [2024-07-14 03:17:30.728176] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.731 [2024-07-14 03:17:30.728547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.731 [2024-07-14 03:17:30.728746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.731 [2024-07-14 03:17:30.728795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.731 [2024-07-14 03:17:30.728813] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.731 [2024-07-14 03:17:30.728999] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.731 [2024-07-14 03:17:30.729170] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.731 [2024-07-14 03:17:30.729195] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.731 [2024-07-14 03:17:30.729211] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.731 [2024-07-14 03:17:30.731458] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.731 [2024-07-14 03:17:30.740721] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.731 [2024-07-14 03:17:30.741045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.731 [2024-07-14 03:17:30.741262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.731 [2024-07-14 03:17:30.741287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.731 [2024-07-14 03:17:30.741303] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.731 [2024-07-14 03:17:30.741508] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.731 [2024-07-14 03:17:30.741664] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.732 [2024-07-14 03:17:30.741688] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.732 [2024-07-14 03:17:30.741703] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.732 [2024-07-14 03:17:30.744055] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.732 [2024-07-14 03:17:30.753698] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.732 [2024-07-14 03:17:30.754019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.732 [2024-07-14 03:17:30.754284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.732 [2024-07-14 03:17:30.754329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.732 [2024-07-14 03:17:30.754346] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.732 [2024-07-14 03:17:30.754529] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.732 [2024-07-14 03:17:30.754663] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.732 [2024-07-14 03:17:30.754687] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.732 [2024-07-14 03:17:30.754703] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.732 [2024-07-14 03:17:30.757016] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.732 [2024-07-14 03:17:30.766323] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.732 [2024-07-14 03:17:30.766715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.766949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.766976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.732 [2024-07-14 03:17:30.766992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.732 [2024-07-14 03:17:30.767157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.732 [2024-07-14 03:17:30.767351] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.732 [2024-07-14 03:17:30.767376] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.732 [2024-07-14 03:17:30.767392] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.732 [2024-07-14 03:17:30.769884] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.732 [2024-07-14 03:17:30.779088] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.732 [2024-07-14 03:17:30.779471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.779705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.779751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.732 [2024-07-14 03:17:30.779768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.732 [2024-07-14 03:17:30.779943] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.732 [2024-07-14 03:17:30.780113] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.732 [2024-07-14 03:17:30.780137] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.732 [2024-07-14 03:17:30.780153] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.732 [2024-07-14 03:17:30.782453] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.732 [2024-07-14 03:17:30.791824] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.732 [2024-07-14 03:17:30.792220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.792394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.792422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.732 [2024-07-14 03:17:30.792439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.732 [2024-07-14 03:17:30.792568] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.732 [2024-07-14 03:17:30.792755] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.732 [2024-07-14 03:17:30.792779] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.732 [2024-07-14 03:17:30.792795] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.732 [2024-07-14 03:17:30.795070] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.732 [2024-07-14 03:17:30.804507] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.732 [2024-07-14 03:17:30.804921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.805079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.805104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.732 [2024-07-14 03:17:30.805120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.732 [2024-07-14 03:17:30.805294] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.732 [2024-07-14 03:17:30.805448] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.732 [2024-07-14 03:17:30.805472] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.732 [2024-07-14 03:17:30.805488] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.732 [2024-07-14 03:17:30.807849] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.732 [2024-07-14 03:17:30.817335] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.732 [2024-07-14 03:17:30.817688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.817886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.817914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.732 [2024-07-14 03:17:30.817932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.732 [2024-07-14 03:17:30.818097] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.732 [2024-07-14 03:17:30.818267] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.732 [2024-07-14 03:17:30.818290] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.732 [2024-07-14 03:17:30.818306] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.732 [2024-07-14 03:17:30.820661] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.732 [2024-07-14 03:17:30.829899] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.732 [2024-07-14 03:17:30.830210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.830454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.830504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.732 [2024-07-14 03:17:30.830522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.732 [2024-07-14 03:17:30.830670] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.732 [2024-07-14 03:17:30.830856] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.732 [2024-07-14 03:17:30.830892] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.732 [2024-07-14 03:17:30.830909] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.732 [2024-07-14 03:17:30.833570] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.732 [2024-07-14 03:17:30.842306] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.732 [2024-07-14 03:17:30.842782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.843024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.843052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.732 [2024-07-14 03:17:30.843069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.732 [2024-07-14 03:17:30.843216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.732 [2024-07-14 03:17:30.843368] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.732 [2024-07-14 03:17:30.843392] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.732 [2024-07-14 03:17:30.843408] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.732 [2024-07-14 03:17:30.845853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.732 [2024-07-14 03:17:30.854855] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.732 [2024-07-14 03:17:30.855297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.855498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.855524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.732 [2024-07-14 03:17:30.855540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.732 [2024-07-14 03:17:30.855702] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.732 [2024-07-14 03:17:30.855885] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.732 [2024-07-14 03:17:30.855909] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.732 [2024-07-14 03:17:30.855925] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.732 [2024-07-14 03:17:30.858388] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.732 [2024-07-14 03:17:30.867253] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.732 [2024-07-14 03:17:30.867647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.867910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.867939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.732 [2024-07-14 03:17:30.867962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.732 [2024-07-14 03:17:30.868146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.732 [2024-07-14 03:17:30.868315] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.732 [2024-07-14 03:17:30.868340] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.732 [2024-07-14 03:17:30.868355] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.732 [2024-07-14 03:17:30.870710] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.732 [2024-07-14 03:17:30.879771] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.732 [2024-07-14 03:17:30.880116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.880514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.880570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.732 [2024-07-14 03:17:30.880587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.732 [2024-07-14 03:17:30.880734] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.732 [2024-07-14 03:17:30.880950] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.732 [2024-07-14 03:17:30.880975] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.732 [2024-07-14 03:17:30.880990] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.732 [2024-07-14 03:17:30.883380] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.732 [2024-07-14 03:17:30.892391] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.732 [2024-07-14 03:17:30.892896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.893091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.893119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.732 [2024-07-14 03:17:30.893136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.732 [2024-07-14 03:17:30.893301] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.732 [2024-07-14 03:17:30.893471] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.732 [2024-07-14 03:17:30.893495] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.732 [2024-07-14 03:17:30.893511] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.732 [2024-07-14 03:17:30.895834] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.732 [2024-07-14 03:17:30.905050] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.732 [2024-07-14 03:17:30.905400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.905759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.905808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.732 [2024-07-14 03:17:30.905825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.732 [2024-07-14 03:17:30.905988] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.732 [2024-07-14 03:17:30.906195] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.732 [2024-07-14 03:17:30.906220] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.732 [2024-07-14 03:17:30.906235] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.732 [2024-07-14 03:17:30.908644] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.732 [2024-07-14 03:17:30.917768] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.732 [2024-07-14 03:17:30.918121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.918387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.732 [2024-07-14 03:17:30.918412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.732 [2024-07-14 03:17:30.918427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.732 [2024-07-14 03:17:30.918602] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.732 [2024-07-14 03:17:30.918790] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.732 [2024-07-14 03:17:30.918814] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.732 [2024-07-14 03:17:30.918830] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.732 [2024-07-14 03:17:30.921213] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.732 [2024-07-14 03:17:30.930446] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.733 [2024-07-14 03:17:30.930941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.733 [2024-07-14 03:17:30.931200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.733 [2024-07-14 03:17:30.931245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.733 [2024-07-14 03:17:30.931263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.733 [2024-07-14 03:17:30.931410] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.733 [2024-07-14 03:17:30.931543] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.733 [2024-07-14 03:17:30.931567] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.733 [2024-07-14 03:17:30.931583] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.733 [2024-07-14 03:17:30.934024] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.733 [2024-07-14 03:17:30.942977] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.733 [2024-07-14 03:17:30.943399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.733 [2024-07-14 03:17:30.943638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.733 [2024-07-14 03:17:30.943687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.733 [2024-07-14 03:17:30.943705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.733 [2024-07-14 03:17:30.943900] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.733 [2024-07-14 03:17:30.944059] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.733 [2024-07-14 03:17:30.944083] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.733 [2024-07-14 03:17:30.944099] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.733 [2024-07-14 03:17:30.946327] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.733 [2024-07-14 03:17:30.955447] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.733 [2024-07-14 03:17:30.955830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.733 [2024-07-14 03:17:30.956019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.733 [2024-07-14 03:17:30.956049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.733 [2024-07-14 03:17:30.956066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.733 [2024-07-14 03:17:30.956250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.733 [2024-07-14 03:17:30.956419] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.733 [2024-07-14 03:17:30.956444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.733 [2024-07-14 03:17:30.956459] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.733 [2024-07-14 03:17:30.958776] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.733 [2024-07-14 03:17:30.968130] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.733 [2024-07-14 03:17:30.968478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.733 [2024-07-14 03:17:30.968671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.733 [2024-07-14 03:17:30.968717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.733 [2024-07-14 03:17:30.968735] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.733 [2024-07-14 03:17:30.968864] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.733 [2024-07-14 03:17:30.969064] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.733 [2024-07-14 03:17:30.969088] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.733 [2024-07-14 03:17:30.969103] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.733 [2024-07-14 03:17:30.971458] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.733 [2024-07-14 03:17:30.980704] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.733 [2024-07-14 03:17:30.981121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.733 [2024-07-14 03:17:30.981352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.733 [2024-07-14 03:17:30.981401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.733 [2024-07-14 03:17:30.981419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.733 [2024-07-14 03:17:30.981616] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.733 [2024-07-14 03:17:30.981806] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.733 [2024-07-14 03:17:30.981836] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.733 [2024-07-14 03:17:30.981853] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.993 [2024-07-14 03:17:30.984152] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.993 [2024-07-14 03:17:30.993165] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.993 [2024-07-14 03:17:30.993615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.993 [2024-07-14 03:17:30.993817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.993 [2024-07-14 03:17:30.993845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.993 [2024-07-14 03:17:30.993863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.993 [2024-07-14 03:17:30.994027] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.993 [2024-07-14 03:17:30.994233] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.993 [2024-07-14 03:17:30.994257] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.993 [2024-07-14 03:17:30.994273] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.993 [2024-07-14 03:17:30.996611] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.993 [2024-07-14 03:17:31.005779] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.993 [2024-07-14 03:17:31.006186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.993 [2024-07-14 03:17:31.006551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.993 [2024-07-14 03:17:31.006598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.993 [2024-07-14 03:17:31.006616] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.993 [2024-07-14 03:17:31.006745] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.993 [2024-07-14 03:17:31.006926] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.993 [2024-07-14 03:17:31.006951] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.993 [2024-07-14 03:17:31.006967] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.993 [2024-07-14 03:17:31.009377] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.993 [2024-07-14 03:17:31.018498] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.993 [2024-07-14 03:17:31.018914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.993 [2024-07-14 03:17:31.019140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.993 [2024-07-14 03:17:31.019168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.993 [2024-07-14 03:17:31.019186] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.993 [2024-07-14 03:17:31.019333] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.993 [2024-07-14 03:17:31.019521] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.993 [2024-07-14 03:17:31.019544] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.993 [2024-07-14 03:17:31.019566] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.993 [2024-07-14 03:17:31.021901] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.993 [2024-07-14 03:17:31.031209] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.993 [2024-07-14 03:17:31.031677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.993 [2024-07-14 03:17:31.031943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.993 [2024-07-14 03:17:31.031973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.993 [2024-07-14 03:17:31.031990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.993 [2024-07-14 03:17:31.032138] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.993 [2024-07-14 03:17:31.032253] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.993 [2024-07-14 03:17:31.032276] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.993 [2024-07-14 03:17:31.032292] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.993 [2024-07-14 03:17:31.034394] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.993 [2024-07-14 03:17:31.043730] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.993 [2024-07-14 03:17:31.044119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.993 [2024-07-14 03:17:31.044379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.993 [2024-07-14 03:17:31.044425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.993 [2024-07-14 03:17:31.044443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.993 [2024-07-14 03:17:31.044626] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.993 [2024-07-14 03:17:31.044795] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.993 [2024-07-14 03:17:31.044819] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.993 [2024-07-14 03:17:31.044834] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.993 [2024-07-14 03:17:31.046831] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.993 [2024-07-14 03:17:31.056245] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.993 [2024-07-14 03:17:31.056780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.993 [2024-07-14 03:17:31.057013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.993 [2024-07-14 03:17:31.057043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.993 [2024-07-14 03:17:31.057061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.993 [2024-07-14 03:17:31.057263] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.993 [2024-07-14 03:17:31.057451] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.993 [2024-07-14 03:17:31.057476] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.993 [2024-07-14 03:17:31.057492] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.993 [2024-07-14 03:17:31.059801] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.993 [2024-07-14 03:17:31.068796] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.993 [2024-07-14 03:17:31.069198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.993 [2024-07-14 03:17:31.069455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.993 [2024-07-14 03:17:31.069484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.993 [2024-07-14 03:17:31.069502] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.993 [2024-07-14 03:17:31.069703] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.993 [2024-07-14 03:17:31.069922] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.993 [2024-07-14 03:17:31.069948] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.993 [2024-07-14 03:17:31.069965] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.993 [2024-07-14 03:17:31.072176] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.993 [2024-07-14 03:17:31.081239] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.993 [2024-07-14 03:17:31.081627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.993 [2024-07-14 03:17:31.081794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.993 [2024-07-14 03:17:31.081823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.993 [2024-07-14 03:17:31.081841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.993 [2024-07-14 03:17:31.081966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.993 [2024-07-14 03:17:31.082120] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.993 [2024-07-14 03:17:31.082144] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.993 [2024-07-14 03:17:31.082161] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.993 [2024-07-14 03:17:31.084459] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.993 [2024-07-14 03:17:31.093824] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:35.993 [2024-07-14 03:17:31.094190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.993 [2024-07-14 03:17:31.094422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:35.993 [2024-07-14 03:17:31.094452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:35.993 [2024-07-14 03:17:31.094470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:35.993 [2024-07-14 03:17:31.094617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:35.993 [2024-07-14 03:17:31.094752] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:35.993 [2024-07-14 03:17:31.094776] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:35.993 [2024-07-14 03:17:31.094792] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:35.993 [2024-07-14 03:17:31.097140] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:35.993 [2024-07-14 03:17:31.106502] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.993 [2024-07-14 03:17:31.106880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.993 [2024-07-14 03:17:31.107094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.993 [2024-07-14 03:17:31.107123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.993 [2024-07-14 03:17:31.107141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.993 [2024-07-14 03:17:31.107325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.993 [2024-07-14 03:17:31.107514] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.993 [2024-07-14 03:17:31.107538] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.993 [2024-07-14 03:17:31.107554] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.993 [2024-07-14 03:17:31.109858] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.993 [2024-07-14 03:17:31.119033] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.993 [2024-07-14 03:17:31.119375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.993 [2024-07-14 03:17:31.119644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.993 [2024-07-14 03:17:31.119691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.993 [2024-07-14 03:17:31.119709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.993 [2024-07-14 03:17:31.119906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.993 [2024-07-14 03:17:31.120042] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.993 [2024-07-14 03:17:31.120066] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.993 [2024-07-14 03:17:31.120083] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.993 [2024-07-14 03:17:31.122218] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.993 [2024-07-14 03:17:31.131560] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.993 [2024-07-14 03:17:31.131914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.993 [2024-07-14 03:17:31.132116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.993 [2024-07-14 03:17:31.132141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.993 [2024-07-14 03:17:31.132174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.993 [2024-07-14 03:17:31.132322] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.993 [2024-07-14 03:17:31.132493] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.993 [2024-07-14 03:17:31.132518] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.993 [2024-07-14 03:17:31.132534] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.993 [2024-07-14 03:17:31.134712] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.993 [2024-07-14 03:17:31.144261] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.993 [2024-07-14 03:17:31.144712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.993 [2024-07-14 03:17:31.144984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.993 [2024-07-14 03:17:31.145014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.993 [2024-07-14 03:17:31.145032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.993 [2024-07-14 03:17:31.145163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.993 [2024-07-14 03:17:31.145353] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.993 [2024-07-14 03:17:31.145378] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.993 [2024-07-14 03:17:31.145394] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.993 [2024-07-14 03:17:31.147588] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.993 [2024-07-14 03:17:31.157097] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.994 [2024-07-14 03:17:31.157619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.994 [2024-07-14 03:17:31.157840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.994 [2024-07-14 03:17:31.157878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.994 [2024-07-14 03:17:31.157899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.994 [2024-07-14 03:17:31.158065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.994 [2024-07-14 03:17:31.158235] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.994 [2024-07-14 03:17:31.158260] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.994 [2024-07-14 03:17:31.158276] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.994 [2024-07-14 03:17:31.160647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.994 [2024-07-14 03:17:31.169642] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.994 [2024-07-14 03:17:31.170004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.994 [2024-07-14 03:17:31.170274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.994 [2024-07-14 03:17:31.170326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.994 [2024-07-14 03:17:31.170344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.994 [2024-07-14 03:17:31.170493] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.994 [2024-07-14 03:17:31.170682] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.994 [2024-07-14 03:17:31.170707] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.994 [2024-07-14 03:17:31.170723] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.994 [2024-07-14 03:17:31.172938] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.994 [2024-07-14 03:17:31.182472] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.994 [2024-07-14 03:17:31.182873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.994 [2024-07-14 03:17:31.183037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.994 [2024-07-14 03:17:31.183071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.994 [2024-07-14 03:17:31.183090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.994 [2024-07-14 03:17:31.183257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.994 [2024-07-14 03:17:31.183409] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.994 [2024-07-14 03:17:31.183434] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.994 [2024-07-14 03:17:31.183450] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.994 [2024-07-14 03:17:31.185735] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.994 [2024-07-14 03:17:31.195060] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.994 [2024-07-14 03:17:31.195433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.994 [2024-07-14 03:17:31.195648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.994 [2024-07-14 03:17:31.195674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.994 [2024-07-14 03:17:31.195690] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.994 [2024-07-14 03:17:31.195847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.994 [2024-07-14 03:17:31.195994] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.994 [2024-07-14 03:17:31.196020] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.994 [2024-07-14 03:17:31.196037] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.994 [2024-07-14 03:17:31.198209] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.994 [2024-07-14 03:17:31.207665] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.994 [2024-07-14 03:17:31.208031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.994 [2024-07-14 03:17:31.208250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.994 [2024-07-14 03:17:31.208276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.994 [2024-07-14 03:17:31.208292] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.994 [2024-07-14 03:17:31.208447] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.994 [2024-07-14 03:17:31.208631] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.994 [2024-07-14 03:17:31.208655] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.994 [2024-07-14 03:17:31.208671] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.994 [2024-07-14 03:17:31.210913] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.994 [2024-07-14 03:17:31.220472] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.994 [2024-07-14 03:17:31.220924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.994 [2024-07-14 03:17:31.221166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.994 [2024-07-14 03:17:31.221200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.994 [2024-07-14 03:17:31.221238] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.994 [2024-07-14 03:17:31.221370] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.994 [2024-07-14 03:17:31.221522] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.994 [2024-07-14 03:17:31.221547] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.994 [2024-07-14 03:17:31.221563] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.994 [2024-07-14 03:17:31.223909] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:35.994 [2024-07-14 03:17:31.233093] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:35.994 [2024-07-14 03:17:31.233414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.994 [2024-07-14 03:17:31.233639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:35.994 [2024-07-14 03:17:31.233668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:35.994 [2024-07-14 03:17:31.233686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:35.994 [2024-07-14 03:17:31.233879] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:35.994 [2024-07-14 03:17:31.234050] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:35.994 [2024-07-14 03:17:31.234074] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:35.994 [2024-07-14 03:17:31.234090] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:35.994 [2024-07-14 03:17:31.236514] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.303 [2024-07-14 03:17:31.245837] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.303 [2024-07-14 03:17:31.246265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.303 [2024-07-14 03:17:31.246522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.303 [2024-07-14 03:17:31.246551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.303 [2024-07-14 03:17:31.246569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.303 [2024-07-14 03:17:31.246754] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.303 [2024-07-14 03:17:31.246954] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.303 [2024-07-14 03:17:31.246980] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.303 [2024-07-14 03:17:31.246996] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.303 [2024-07-14 03:17:31.249502] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.303 [2024-07-14 03:17:31.258348] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.303 [2024-07-14 03:17:31.258923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.303 [2024-07-14 03:17:31.259311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.303 [2024-07-14 03:17:31.259379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.303 [2024-07-14 03:17:31.259397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.303 [2024-07-14 03:17:31.259551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.303 [2024-07-14 03:17:31.259668] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.303 [2024-07-14 03:17:31.259692] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.303 [2024-07-14 03:17:31.259708] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.303 [2024-07-14 03:17:31.262016] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.303 [2024-07-14 03:17:31.270886] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.303 [2024-07-14 03:17:31.271236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.303 [2024-07-14 03:17:31.271445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.303 [2024-07-14 03:17:31.271532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.303 [2024-07-14 03:17:31.271551] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.303 [2024-07-14 03:17:31.271698] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.303 [2024-07-14 03:17:31.271899] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.303 [2024-07-14 03:17:31.271924] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.303 [2024-07-14 03:17:31.271941] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.303 [2024-07-14 03:17:31.274152] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.303 [2024-07-14 03:17:31.283497] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.303 [2024-07-14 03:17:31.283876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.303 [2024-07-14 03:17:31.284072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.303 [2024-07-14 03:17:31.284101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.303 [2024-07-14 03:17:31.284119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.303 [2024-07-14 03:17:31.284285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.303 [2024-07-14 03:17:31.284438] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.303 [2024-07-14 03:17:31.284463] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.303 [2024-07-14 03:17:31.284479] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.303 [2024-07-14 03:17:31.286816] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.303 [2024-07-14 03:17:31.295849] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.303 [2024-07-14 03:17:31.296204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.303 [2024-07-14 03:17:31.296491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.303 [2024-07-14 03:17:31.296538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.303 [2024-07-14 03:17:31.296557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.303 [2024-07-14 03:17:31.296687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.303 [2024-07-14 03:17:31.296863] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.303 [2024-07-14 03:17:31.296899] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.303 [2024-07-14 03:17:31.296916] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.303 [2024-07-14 03:17:31.299196] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.303 [2024-07-14 03:17:31.308459] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.303 [2024-07-14 03:17:31.308824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.303 [2024-07-14 03:17:31.309076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.303 [2024-07-14 03:17:31.309106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.303 [2024-07-14 03:17:31.309124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.303 [2024-07-14 03:17:31.309291] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.303 [2024-07-14 03:17:31.309424] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.303 [2024-07-14 03:17:31.309449] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.303 [2024-07-14 03:17:31.309466] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.303 [2024-07-14 03:17:31.311970] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.303 [2024-07-14 03:17:31.321072] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.303 [2024-07-14 03:17:31.321457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.303 [2024-07-14 03:17:31.321784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.303 [2024-07-14 03:17:31.321853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.303 [2024-07-14 03:17:31.321883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.303 [2024-07-14 03:17:31.322015] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.303 [2024-07-14 03:17:31.322203] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.303 [2024-07-14 03:17:31.322228] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.303 [2024-07-14 03:17:31.322244] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.303 [2024-07-14 03:17:31.324669] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.303 [2024-07-14 03:17:31.333770] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.304 [2024-07-14 03:17:31.334188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.304 [2024-07-14 03:17:31.334425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.304 [2024-07-14 03:17:31.334465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.304 [2024-07-14 03:17:31.334482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.304 [2024-07-14 03:17:31.334664] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.304 [2024-07-14 03:17:31.334816] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.304 [2024-07-14 03:17:31.334847] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.304 [2024-07-14 03:17:31.334875] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.304 [2024-07-14 03:17:31.337270] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.304 [2024-07-14 03:17:31.346462] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.304 [2024-07-14 03:17:31.346784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.304 [2024-07-14 03:17:31.347015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.304 [2024-07-14 03:17:31.347045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.304 [2024-07-14 03:17:31.347064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.304 [2024-07-14 03:17:31.347229] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.304 [2024-07-14 03:17:31.347381] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.304 [2024-07-14 03:17:31.347406] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.304 [2024-07-14 03:17:31.347422] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.304 [2024-07-14 03:17:31.349836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.304 [2024-07-14 03:17:31.358985] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.304 [2024-07-14 03:17:31.359409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.304 [2024-07-14 03:17:31.359624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.304 [2024-07-14 03:17:31.359653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.304 [2024-07-14 03:17:31.359671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.304 [2024-07-14 03:17:31.359837] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.304 [2024-07-14 03:17:31.360000] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.304 [2024-07-14 03:17:31.360026] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.304 [2024-07-14 03:17:31.360042] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.304 [2024-07-14 03:17:31.362448] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.304 [2024-07-14 03:17:31.371363] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.304 [2024-07-14 03:17:31.371772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.304 [2024-07-14 03:17:31.371975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.304 [2024-07-14 03:17:31.372006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.304 [2024-07-14 03:17:31.372024] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.304 [2024-07-14 03:17:31.372172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.304 [2024-07-14 03:17:31.372361] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.304 [2024-07-14 03:17:31.372386] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.304 [2024-07-14 03:17:31.372409] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.304 [2024-07-14 03:17:31.374834] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.304 [2024-07-14 03:17:31.383786] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.304 [2024-07-14 03:17:31.384204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.304 [2024-07-14 03:17:31.384500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.304 [2024-07-14 03:17:31.384550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.304 [2024-07-14 03:17:31.384568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.304 [2024-07-14 03:17:31.384734] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.304 [2024-07-14 03:17:31.384952] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.304 [2024-07-14 03:17:31.384977] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.304 [2024-07-14 03:17:31.384994] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.304 [2024-07-14 03:17:31.387382] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.304 [2024-07-14 03:17:31.396465] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.304 [2024-07-14 03:17:31.396808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.304 [2024-07-14 03:17:31.397042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.304 [2024-07-14 03:17:31.397072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.304 [2024-07-14 03:17:31.397090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.304 [2024-07-14 03:17:31.397239] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.304 [2024-07-14 03:17:31.397426] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.304 [2024-07-14 03:17:31.397451] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.304 [2024-07-14 03:17:31.397468] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.304 [2024-07-14 03:17:31.399769] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.304 [2024-07-14 03:17:31.409131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.304 [2024-07-14 03:17:31.409479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.304 [2024-07-14 03:17:31.409744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.304 [2024-07-14 03:17:31.409797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.304 [2024-07-14 03:17:31.409815] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.304 [2024-07-14 03:17:31.410011] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.304 [2024-07-14 03:17:31.410182] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.304 [2024-07-14 03:17:31.410206] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.304 [2024-07-14 03:17:31.410223] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.304 [2024-07-14 03:17:31.412581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.304 [2024-07-14 03:17:31.421529] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.304 [2024-07-14 03:17:31.421933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.304 [2024-07-14 03:17:31.422230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.304 [2024-07-14 03:17:31.422280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.304 [2024-07-14 03:17:31.422298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.304 [2024-07-14 03:17:31.422464] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.305 [2024-07-14 03:17:31.422617] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.305 [2024-07-14 03:17:31.422642] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.305 [2024-07-14 03:17:31.422658] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.305 [2024-07-14 03:17:31.425182] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.305 [2024-07-14 03:17:31.434198] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.305 [2024-07-14 03:17:31.434568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.305 [2024-07-14 03:17:31.434814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.305 [2024-07-14 03:17:31.434839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.305 [2024-07-14 03:17:31.434880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.305 [2024-07-14 03:17:31.435043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.305 [2024-07-14 03:17:31.435241] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.305 [2024-07-14 03:17:31.435266] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.305 [2024-07-14 03:17:31.435282] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.305 [2024-07-14 03:17:31.437310] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.305 [2024-07-14 03:17:31.446849] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.305 [2024-07-14 03:17:31.447249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.305 [2024-07-14 03:17:31.447535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.305 [2024-07-14 03:17:31.447585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.305 [2024-07-14 03:17:31.447603] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.305 [2024-07-14 03:17:31.447733] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.305 [2024-07-14 03:17:31.447950] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.305 [2024-07-14 03:17:31.447975] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.305 [2024-07-14 03:17:31.447991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.305 [2024-07-14 03:17:31.450256] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.305 [2024-07-14 03:17:31.459527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.305 [2024-07-14 03:17:31.459987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.305 [2024-07-14 03:17:31.460199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.305 [2024-07-14 03:17:31.460224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.305 [2024-07-14 03:17:31.460240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.305 [2024-07-14 03:17:31.460395] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.305 [2024-07-14 03:17:31.460557] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.305 [2024-07-14 03:17:31.460582] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.305 [2024-07-14 03:17:31.460599] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.305 [2024-07-14 03:17:31.463051] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.305 [2024-07-14 03:17:31.472289] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.305 [2024-07-14 03:17:31.472704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.305 [2024-07-14 03:17:31.472882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.305 [2024-07-14 03:17:31.472911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.305 [2024-07-14 03:17:31.472929] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.305 [2024-07-14 03:17:31.473095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.305 [2024-07-14 03:17:31.473283] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.305 [2024-07-14 03:17:31.473308] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.305 [2024-07-14 03:17:31.473324] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.305 [2024-07-14 03:17:31.475572] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.305 [2024-07-14 03:17:31.484908] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.305 [2024-07-14 03:17:31.485314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.305 [2024-07-14 03:17:31.485517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.305 [2024-07-14 03:17:31.485546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.305 [2024-07-14 03:17:31.485564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.305 [2024-07-14 03:17:31.485765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.305 [2024-07-14 03:17:31.485948] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.305 [2024-07-14 03:17:31.485973] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.305 [2024-07-14 03:17:31.485990] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.305 [2024-07-14 03:17:31.488380] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.305 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2127511 Killed "${NVMF_APP[@]}" "$@" 00:29:36.305 03:17:31 -- host/bdevperf.sh@36 -- # tgt_init 00:29:36.305 03:17:31 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:36.305 03:17:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:36.305 03:17:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:36.305 03:17:31 -- common/autotest_common.sh@10 -- # set +x 00:29:36.305 [2024-07-14 03:17:31.497559] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.305 [2024-07-14 03:17:31.497925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.305 [2024-07-14 03:17:31.498133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.305 [2024-07-14 03:17:31.498174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.305 [2024-07-14 03:17:31.498190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.305 [2024-07-14 03:17:31.498319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.305 [2024-07-14 03:17:31.498491] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.305 [2024-07-14 03:17:31.498515] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.305 [2024-07-14 03:17:31.498532] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.305 [2024-07-14 03:17:31.500954] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.305 03:17:31 -- nvmf/common.sh@469 -- # nvmfpid=2128500 00:29:36.305 03:17:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:36.305 03:17:31 -- nvmf/common.sh@470 -- # waitforlisten 2128500 00:29:36.305 03:17:31 -- common/autotest_common.sh@819 -- # '[' -z 2128500 ']' 00:29:36.305 03:17:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.305 03:17:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:36.305 03:17:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.306 03:17:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:36.306 03:17:31 -- common/autotest_common.sh@10 -- # set +x 00:29:36.565 [2024-07-14 03:17:31.510093] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.565 [2024-07-14 03:17:31.510537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.565 [2024-07-14 03:17:31.510716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.565 [2024-07-14 03:17:31.510745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.565 [2024-07-14 03:17:31.510764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.565 [2024-07-14 03:17:31.510978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.565 [2024-07-14 03:17:31.511132] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.565 [2024-07-14 03:17:31.511156] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.565 [2024-07-14 03:17:31.511174] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.565 [2024-07-14 03:17:31.513547] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.565 [2024-07-14 03:17:31.522435] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.565 [2024-07-14 03:17:31.522779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.565 [2024-07-14 03:17:31.522958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.565 [2024-07-14 03:17:31.522992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.565 [2024-07-14 03:17:31.523011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.565 [2024-07-14 03:17:31.523149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.565 [2024-07-14 03:17:31.523258] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.565 [2024-07-14 03:17:31.523281] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.565 [2024-07-14 03:17:31.523296] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.565 [2024-07-14 03:17:31.525421] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.565 [2024-07-14 03:17:31.534797] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.565 [2024-07-14 03:17:31.535174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.565 [2024-07-14 03:17:31.535359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.565 [2024-07-14 03:17:31.535386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.565 [2024-07-14 03:17:31.535403] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.565 [2024-07-14 03:17:31.535615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.565 [2024-07-14 03:17:31.535729] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.565 [2024-07-14 03:17:31.535750] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.565 [2024-07-14 03:17:31.535765] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.565 [2024-07-14 03:17:31.537958] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:36.565 [2024-07-14 03:17:31.543316] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:29:36.565 [2024-07-14 03:17:31.543372] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.565 [2024-07-14 03:17:31.547121] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.565 [2024-07-14 03:17:31.547518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.565 [2024-07-14 03:17:31.547703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.565 [2024-07-14 03:17:31.547728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.565 [2024-07-14 03:17:31.547745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.565 [2024-07-14 03:17:31.547936] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.565 [2024-07-14 03:17:31.548125] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.565 [2024-07-14 03:17:31.548158] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.565 [2024-07-14 03:17:31.548185] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.565 [2024-07-14 03:17:31.550213] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.565 [2024-07-14 03:17:31.559371] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.565 [2024-07-14 03:17:31.559729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.565 [2024-07-14 03:17:31.559952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.565 [2024-07-14 03:17:31.559979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.565 [2024-07-14 03:17:31.559996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.565 [2024-07-14 03:17:31.560113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.565 [2024-07-14 03:17:31.560288] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.565 [2024-07-14 03:17:31.560308] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.565 [2024-07-14 03:17:31.560322] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.565 [2024-07-14 03:17:31.562637] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.565 [2024-07-14 03:17:31.571511] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.565 [2024-07-14 03:17:31.571947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.565 [2024-07-14 03:17:31.572153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.565 [2024-07-14 03:17:31.572179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.565 [2024-07-14 03:17:31.572206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.565 [2024-07-14 03:17:31.572394] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.565 [2024-07-14 03:17:31.572547] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.565 [2024-07-14 03:17:31.572567] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.565 [2024-07-14 03:17:31.572580] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.565 [2024-07-14 03:17:31.574363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.565 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.565 [2024-07-14 03:17:31.584025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.565 [2024-07-14 03:17:31.584379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.565 [2024-07-14 03:17:31.584552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.565 [2024-07-14 03:17:31.584580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.565 [2024-07-14 03:17:31.584599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.565 [2024-07-14 03:17:31.584710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.565 [2024-07-14 03:17:31.584862] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.565 [2024-07-14 03:17:31.584895] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.565 [2024-07-14 03:17:31.584912] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.565 [2024-07-14 03:17:31.587277] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.565 [2024-07-14 03:17:31.596436] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.565 [2024-07-14 03:17:31.596773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.565 [2024-07-14 03:17:31.597024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.565 [2024-07-14 03:17:31.597051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.565 [2024-07-14 03:17:31.597068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.565 [2024-07-14 03:17:31.597224] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.565 [2024-07-14 03:17:31.597413] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.565 [2024-07-14 03:17:31.597437] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.565 [2024-07-14 03:17:31.597464] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.565 [2024-07-14 03:17:31.599882] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.565 [2024-07-14 03:17:31.608969] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.565 [2024-07-14 03:17:31.609413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.565 [2024-07-14 03:17:31.609634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.565 [2024-07-14 03:17:31.609679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.565 [2024-07-14 03:17:31.609697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.566 [2024-07-14 03:17:31.609863] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.566 [2024-07-14 03:17:31.609998] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.566 [2024-07-14 03:17:31.610021] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.566 [2024-07-14 03:17:31.610036] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.566 [2024-07-14 03:17:31.612426] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.566 [2024-07-14 03:17:31.615347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:36.566 [2024-07-14 03:17:31.621569] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.566 [2024-07-14 03:17:31.622016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.566 [2024-07-14 03:17:31.622212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.566 [2024-07-14 03:17:31.622238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.566 [2024-07-14 03:17:31.622256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.566 [2024-07-14 03:17:31.622412] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.566 [2024-07-14 03:17:31.622569] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.566 [2024-07-14 03:17:31.622592] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.566 [2024-07-14 03:17:31.622609] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.566 [2024-07-14 03:17:31.624667] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.566 [2024-07-14 03:17:31.633908] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.566 [2024-07-14 03:17:31.634463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.566 [2024-07-14 03:17:31.634684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.566 [2024-07-14 03:17:31.634719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.566 [2024-07-14 03:17:31.634739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.566 [2024-07-14 03:17:31.634954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.566 [2024-07-14 03:17:31.635092] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.566 [2024-07-14 03:17:31.635114] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.566 [2024-07-14 03:17:31.635131] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.566 [2024-07-14 03:17:31.637545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.566 [2024-07-14 03:17:31.646644] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.566 [2024-07-14 03:17:31.647065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.566 [2024-07-14 03:17:31.647245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.566 [2024-07-14 03:17:31.647271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.566 [2024-07-14 03:17:31.647287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.566 [2024-07-14 03:17:31.647422] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.566 [2024-07-14 03:17:31.647618] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.566 [2024-07-14 03:17:31.647644] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.566 [2024-07-14 03:17:31.647661] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.566 [2024-07-14 03:17:31.649828] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.566 [2024-07-14 03:17:31.659210] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.566 [2024-07-14 03:17:31.659608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.566 [2024-07-14 03:17:31.659826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.566 [2024-07-14 03:17:31.659854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.566 [2024-07-14 03:17:31.659881] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.566 [2024-07-14 03:17:31.660065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.566 [2024-07-14 03:17:31.660246] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.566 [2024-07-14 03:17:31.660271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.566 [2024-07-14 03:17:31.660289] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.566 [2024-07-14 03:17:31.662678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.566 [2024-07-14 03:17:31.671952] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.566 [2024-07-14 03:17:31.672466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.566 [2024-07-14 03:17:31.672712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.566 [2024-07-14 03:17:31.672755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.566 [2024-07-14 03:17:31.672788] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.566 [2024-07-14 03:17:31.673036] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.566 [2024-07-14 03:17:31.673271] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.566 [2024-07-14 03:17:31.673298] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.566 [2024-07-14 03:17:31.673316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.566 [2024-07-14 03:17:31.675778] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.566 [2024-07-14 03:17:31.684472] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.566 [2024-07-14 03:17:31.684924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.566 [2024-07-14 03:17:31.685131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.566 [2024-07-14 03:17:31.685174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.566 [2024-07-14 03:17:31.685192] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.566 [2024-07-14 03:17:31.685409] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.566 [2024-07-14 03:17:31.685544] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.566 [2024-07-14 03:17:31.685569] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.566 [2024-07-14 03:17:31.685587] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.566 [2024-07-14 03:17:31.688060] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.566 [2024-07-14 03:17:31.697201] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.566 [2024-07-14 03:17:31.697595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.566 [2024-07-14 03:17:31.697832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.566 [2024-07-14 03:17:31.697860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.566 [2024-07-14 03:17:31.697915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.566 [2024-07-14 03:17:31.698094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.566 [2024-07-14 03:17:31.698311] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.566 [2024-07-14 03:17:31.698337] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.566 [2024-07-14 03:17:31.698354] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.566 [2024-07-14 03:17:31.700609] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.566 [2024-07-14 03:17:31.707884] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:29:36.566 [2024-07-14 03:17:31.708026] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:36.566 [2024-07-14 03:17:31.708046] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:36.566 [2024-07-14 03:17:31.708061] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:36.566 [2024-07-14 03:17:31.708146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:29:36.566 [2024-07-14 03:17:31.708248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:29:36.566 [2024-07-14 03:17:31.708251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:36.566 [2024-07-14 03:17:31.709802] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.566 [2024-07-14 03:17:31.710180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.566 [2024-07-14 03:17:31.710384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.566 [2024-07-14 03:17:31.710411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.566 [2024-07-14 03:17:31.710428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.566 [2024-07-14 03:17:31.710609] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.566 [2024-07-14 03:17:31.710756] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.566 [2024-07-14 03:17:31.710778] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.566 [2024-07-14 03:17:31.710794] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.566 [2024-07-14 03:17:31.712911] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.566 [2024-07-14 03:17:31.722150] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.566 [2024-07-14 03:17:31.722758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.566 [2024-07-14 03:17:31.723090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.566 [2024-07-14 03:17:31.723119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.566 [2024-07-14 03:17:31.723139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.566 [2024-07-14 03:17:31.723354] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.567 [2024-07-14 03:17:31.723488] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.567 [2024-07-14 03:17:31.723510] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.567 [2024-07-14 03:17:31.723527] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.567 [2024-07-14 03:17:31.725587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.567 [2024-07-14 03:17:31.734513] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.567 [2024-07-14 03:17:31.735071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.567 [2024-07-14 03:17:31.735283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.567 [2024-07-14 03:17:31.735310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.567 [2024-07-14 03:17:31.735329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.567 [2024-07-14 03:17:31.735488] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.567 [2024-07-14 03:17:31.735652] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.567 [2024-07-14 03:17:31.735674] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.567 [2024-07-14 03:17:31.735690] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.567 [2024-07-14 03:17:31.737920] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.567 [2024-07-14 03:17:31.746829] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.567 [2024-07-14 03:17:31.747409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.567 [2024-07-14 03:17:31.747618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.567 [2024-07-14 03:17:31.747645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.567 [2024-07-14 03:17:31.747664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.567 [2024-07-14 03:17:31.747820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.567 [2024-07-14 03:17:31.747996] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.567 [2024-07-14 03:17:31.748020] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.567 [2024-07-14 03:17:31.748038] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.567 [2024-07-14 03:17:31.750198] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.567 [2024-07-14 03:17:31.759077] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.567 [2024-07-14 03:17:31.759612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.567 [2024-07-14 03:17:31.759818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.567 [2024-07-14 03:17:31.759856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.567 [2024-07-14 03:17:31.759884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.567 [2024-07-14 03:17:31.760090] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.567 [2024-07-14 03:17:31.760273] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.567 [2024-07-14 03:17:31.760294] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.567 [2024-07-14 03:17:31.760310] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.567 [2024-07-14 03:17:31.762467] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.567 [2024-07-14 03:17:31.771299] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.567 [2024-07-14 03:17:31.771811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.567 [2024-07-14 03:17:31.772018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.567 [2024-07-14 03:17:31.772047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.567 [2024-07-14 03:17:31.772066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.567 [2024-07-14 03:17:31.772261] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.567 [2024-07-14 03:17:31.772465] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.567 [2024-07-14 03:17:31.772487] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.567 [2024-07-14 03:17:31.772503] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.567 [2024-07-14 03:17:31.774732] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.567 [2024-07-14 03:17:31.783478] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.567 [2024-07-14 03:17:31.783992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.567 [2024-07-14 03:17:31.784197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.567 [2024-07-14 03:17:31.784224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.567 [2024-07-14 03:17:31.784254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.567 [2024-07-14 03:17:31.784396] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.567 [2024-07-14 03:17:31.784547] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.567 [2024-07-14 03:17:31.784568] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.567 [2024-07-14 03:17:31.784584] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.567 [2024-07-14 03:17:31.786647] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.567 [2024-07-14 03:17:31.795601] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.567 [2024-07-14 03:17:31.795984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.567 [2024-07-14 03:17:31.796175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.567 [2024-07-14 03:17:31.796201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.567 [2024-07-14 03:17:31.796218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.567 [2024-07-14 03:17:31.796374] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.567 [2024-07-14 03:17:31.796503] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.567 [2024-07-14 03:17:31.796525] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.567 [2024-07-14 03:17:31.796538] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.567 [2024-07-14 03:17:31.798649] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.567 [2024-07-14 03:17:31.808020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.567 [2024-07-14 03:17:31.808389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.567 [2024-07-14 03:17:31.808597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.567 [2024-07-14 03:17:31.808623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.567 [2024-07-14 03:17:31.808639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.567 [2024-07-14 03:17:31.808821] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.567 [2024-07-14 03:17:31.808962] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.567 [2024-07-14 03:17:31.808985] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.567 [2024-07-14 03:17:31.808999] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.567 [2024-07-14 03:17:31.811209] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.828 [2024-07-14 03:17:31.820231] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.828 [2024-07-14 03:17:31.820639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.828 [2024-07-14 03:17:31.820834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.828 [2024-07-14 03:17:31.820862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.828 [2024-07-14 03:17:31.820888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.828 [2024-07-14 03:17:31.821056] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.828 [2024-07-14 03:17:31.821250] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.828 [2024-07-14 03:17:31.821271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.828 [2024-07-14 03:17:31.821286] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.828 [2024-07-14 03:17:31.823464] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.828 [2024-07-14 03:17:31.832519] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.828 [2024-07-14 03:17:31.832908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.828 [2024-07-14 03:17:31.833065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.828 [2024-07-14 03:17:31.833091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.828 [2024-07-14 03:17:31.833108] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.828 [2024-07-14 03:17:31.833296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.828 [2024-07-14 03:17:31.833410] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.828 [2024-07-14 03:17:31.833430] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.828 [2024-07-14 03:17:31.833445] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.828 [2024-07-14 03:17:31.835445] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.828 [2024-07-14 03:17:31.844672] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.828 [2024-07-14 03:17:31.845029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.828 [2024-07-14 03:17:31.845215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.828 [2024-07-14 03:17:31.845241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.828 [2024-07-14 03:17:31.845258] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.828 [2024-07-14 03:17:31.845375] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.828 [2024-07-14 03:17:31.845527] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.828 [2024-07-14 03:17:31.845548] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.828 [2024-07-14 03:17:31.845562] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.828 [2024-07-14 03:17:31.847702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.828 [2024-07-14 03:17:31.856833] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.828 [2024-07-14 03:17:31.857220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.828 [2024-07-14 03:17:31.857441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.828 [2024-07-14 03:17:31.857467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.828 [2024-07-14 03:17:31.857489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.828 [2024-07-14 03:17:31.857639] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.828 [2024-07-14 03:17:31.857799] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.828 [2024-07-14 03:17:31.857820] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.828 [2024-07-14 03:17:31.857834] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.828 [2024-07-14 03:17:31.859778] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.828 [2024-07-14 03:17:31.869195] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.828 [2024-07-14 03:17:31.869591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.828 [2024-07-14 03:17:31.869794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.828 [2024-07-14 03:17:31.869820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.828 [2024-07-14 03:17:31.869837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.828 [2024-07-14 03:17:31.869978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.828 [2024-07-14 03:17:31.870129] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.828 [2024-07-14 03:17:31.870151] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.828 [2024-07-14 03:17:31.870180] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.828 [2024-07-14 03:17:31.872032] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.828 [2024-07-14 03:17:31.881589] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.828 [2024-07-14 03:17:31.881920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.828 [2024-07-14 03:17:31.882095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.828 [2024-07-14 03:17:31.882121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.828 [2024-07-14 03:17:31.882138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.828 [2024-07-14 03:17:31.882328] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.828 [2024-07-14 03:17:31.882473] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.828 [2024-07-14 03:17:31.882494] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.828 [2024-07-14 03:17:31.882508] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.828 [2024-07-14 03:17:31.884469] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.828 [2024-07-14 03:17:31.893758] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.828 [2024-07-14 03:17:31.894130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.828 [2024-07-14 03:17:31.894283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.828 [2024-07-14 03:17:31.894309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.828 [2024-07-14 03:17:31.894330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.828 [2024-07-14 03:17:31.894495] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.828 [2024-07-14 03:17:31.894656] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.828 [2024-07-14 03:17:31.894678] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.828 [2024-07-14 03:17:31.894692] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.828 [2024-07-14 03:17:31.896685] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.828 [2024-07-14 03:17:31.906118] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.828 [2024-07-14 03:17:31.906539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.828 [2024-07-14 03:17:31.906735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.828 [2024-07-14 03:17:31.906762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.828 [2024-07-14 03:17:31.906779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.828 [2024-07-14 03:17:31.906937] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.828 [2024-07-14 03:17:31.907087] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.828 [2024-07-14 03:17:31.907110] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.828 [2024-07-14 03:17:31.907124] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.828 [2024-07-14 03:17:31.909206] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.828 [2024-07-14 03:17:31.918358] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:36.828 [2024-07-14 03:17:31.918738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.828 [2024-07-14 03:17:31.918955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:36.828 [2024-07-14 03:17:31.918982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:36.829 [2024-07-14 03:17:31.918999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:36.829 [2024-07-14 03:17:31.919211] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:36.829 [2024-07-14 03:17:31.919372] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:36.829 [2024-07-14 03:17:31.919392] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:36.829 [2024-07-14 03:17:31.919407] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:36.829 [2024-07-14 03:17:31.921363] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:36.829 [2024-07-14 03:17:31.930678] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.829 [2024-07-14 03:17:31.930991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:31.931169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:31.931196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.829 [2024-07-14 03:17:31.931212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.829 [2024-07-14 03:17:31.931398] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.829 [2024-07-14 03:17:31.931559] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.829 [2024-07-14 03:17:31.931581] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.829 [2024-07-14 03:17:31.931595] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.829 [2024-07-14 03:17:31.933678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.829 [2024-07-14 03:17:31.942968] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.829 [2024-07-14 03:17:31.943285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:31.943460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:31.943487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.829 [2024-07-14 03:17:31.943504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.829 [2024-07-14 03:17:31.943685] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.829 [2024-07-14 03:17:31.943860] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.829 [2024-07-14 03:17:31.943905] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.829 [2024-07-14 03:17:31.943921] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.829 [2024-07-14 03:17:31.945968] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.829 [2024-07-14 03:17:31.955046] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.829 [2024-07-14 03:17:31.955430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:31.955612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:31.955639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.829 [2024-07-14 03:17:31.955655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.829 [2024-07-14 03:17:31.955758] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.829 [2024-07-14 03:17:31.955932] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.829 [2024-07-14 03:17:31.955954] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.829 [2024-07-14 03:17:31.955968] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.829 [2024-07-14 03:17:31.957975] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.829 [2024-07-14 03:17:31.967385] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.829 [2024-07-14 03:17:31.967738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:31.967927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:31.967956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.829 [2024-07-14 03:17:31.967973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.829 [2024-07-14 03:17:31.968075] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.829 [2024-07-14 03:17:31.968248] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.829 [2024-07-14 03:17:31.968270] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.829 [2024-07-14 03:17:31.968283] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.829 [2024-07-14 03:17:31.970243] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.829 [2024-07-14 03:17:31.979586] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.829 [2024-07-14 03:17:31.979944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:31.980154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:31.980181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.829 [2024-07-14 03:17:31.980198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.829 [2024-07-14 03:17:31.980380] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.829 [2024-07-14 03:17:31.980541] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.829 [2024-07-14 03:17:31.980563] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.829 [2024-07-14 03:17:31.980577] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.829 [2024-07-14 03:17:31.982695] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.829 [2024-07-14 03:17:31.991807] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.829 [2024-07-14 03:17:31.992147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:31.992350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:31.992377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.829 [2024-07-14 03:17:31.992393] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.829 [2024-07-14 03:17:31.992526] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.829 [2024-07-14 03:17:31.992705] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.829 [2024-07-14 03:17:31.992727] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.829 [2024-07-14 03:17:31.992741] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.829 [2024-07-14 03:17:31.994749] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.829 [2024-07-14 03:17:32.004096] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.829 [2024-07-14 03:17:32.004541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:32.004724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:32.004751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.829 [2024-07-14 03:17:32.004768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.829 [2024-07-14 03:17:32.004910] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.829 [2024-07-14 03:17:32.005126] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.829 [2024-07-14 03:17:32.005155] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.829 [2024-07-14 03:17:32.005186] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.829 [2024-07-14 03:17:32.007142] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.829 [2024-07-14 03:17:32.016449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.829 [2024-07-14 03:17:32.016773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:32.016970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:32.016998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.829 [2024-07-14 03:17:32.017014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.829 [2024-07-14 03:17:32.017165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.829 [2024-07-14 03:17:32.017309] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.829 [2024-07-14 03:17:32.017329] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.829 [2024-07-14 03:17:32.017343] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.829 [2024-07-14 03:17:32.019353] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.829 [2024-07-14 03:17:32.028589] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.829 [2024-07-14 03:17:32.028993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:32.029192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:32.029221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.829 [2024-07-14 03:17:32.029237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.829 [2024-07-14 03:17:32.029387] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.829 [2024-07-14 03:17:32.029549] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.829 [2024-07-14 03:17:32.029571] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.829 [2024-07-14 03:17:32.029585] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.829 [2024-07-14 03:17:32.031650] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.829 [2024-07-14 03:17:32.041033] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.829 [2024-07-14 03:17:32.041387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:32.041596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.829 [2024-07-14 03:17:32.041623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.829 [2024-07-14 03:17:32.041640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.830 [2024-07-14 03:17:32.041790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.830 [2024-07-14 03:17:32.042000] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.830 [2024-07-14 03:17:32.042024] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.830 [2024-07-14 03:17:32.042043] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.830 [2024-07-14 03:17:32.043997] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.830 [2024-07-14 03:17:32.053375] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.830 [2024-07-14 03:17:32.053742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.830 [2024-07-14 03:17:32.053952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.830 [2024-07-14 03:17:32.053980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.830 [2024-07-14 03:17:32.053997] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.830 [2024-07-14 03:17:32.054115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.830 [2024-07-14 03:17:32.054340] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.830 [2024-07-14 03:17:32.054363] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.830 [2024-07-14 03:17:32.054377] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.830 [2024-07-14 03:17:32.056390] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.830 [2024-07-14 03:17:32.065634] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.830 [2024-07-14 03:17:32.066015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.830 [2024-07-14 03:17:32.066171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.830 [2024-07-14 03:17:32.066200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.830 [2024-07-14 03:17:32.066217] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.830 [2024-07-14 03:17:32.066415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.830 [2024-07-14 03:17:32.066575] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.830 [2024-07-14 03:17:32.066597] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.830 [2024-07-14 03:17:32.066611] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.830 [2024-07-14 03:17:32.068574] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:36.830 [2024-07-14 03:17:32.077939] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:36.830 [2024-07-14 03:17:32.078246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.830 [2024-07-14 03:17:32.078395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:36.830 [2024-07-14 03:17:32.078422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:36.830 [2024-07-14 03:17:32.078439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:36.830 [2024-07-14 03:17:32.078617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:36.830 [2024-07-14 03:17:32.078776] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:36.830 [2024-07-14 03:17:32.078798] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:36.830 [2024-07-14 03:17:32.078816] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.089 [2024-07-14 03:17:32.081087] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.089 [2024-07-14 03:17:32.090160] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.089 [2024-07-14 03:17:32.090488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.089 [2024-07-14 03:17:32.090669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.089 [2024-07-14 03:17:32.090696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.089 [2024-07-14 03:17:32.090713] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.089 [2024-07-14 03:17:32.090885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.089 [2024-07-14 03:17:32.091066] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.089 [2024-07-14 03:17:32.091089] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.089 [2024-07-14 03:17:32.091104] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.089 [2024-07-14 03:17:32.093104] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.089 [2024-07-14 03:17:32.102420] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.089 [2024-07-14 03:17:32.102852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.089 [2024-07-14 03:17:32.103016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.089 [2024-07-14 03:17:32.103043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.089 [2024-07-14 03:17:32.103060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.089 [2024-07-14 03:17:32.103162] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.089 [2024-07-14 03:17:32.103355] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.089 [2024-07-14 03:17:32.103378] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.089 [2024-07-14 03:17:32.103392] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.089 [2024-07-14 03:17:32.105443] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.089 [2024-07-14 03:17:32.114664] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.089 [2024-07-14 03:17:32.114996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.089 [2024-07-14 03:17:32.115197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.089 [2024-07-14 03:17:32.115224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.090 [2024-07-14 03:17:32.115240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.090 [2024-07-14 03:17:32.115390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.090 [2024-07-14 03:17:32.115596] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.090 [2024-07-14 03:17:32.115619] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.090 [2024-07-14 03:17:32.115633] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.090 [2024-07-14 03:17:32.117780] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.090 [2024-07-14 03:17:32.126833] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.090 [2024-07-14 03:17:32.127199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.090 [2024-07-14 03:17:32.127406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.090 [2024-07-14 03:17:32.127434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.090 [2024-07-14 03:17:32.127450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.090 [2024-07-14 03:17:32.127584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.090 [2024-07-14 03:17:32.127760] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.090 [2024-07-14 03:17:32.127783] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.090 [2024-07-14 03:17:32.127797] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.090 [2024-07-14 03:17:32.129899] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.090 [2024-07-14 03:17:32.139320] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.090 [2024-07-14 03:17:32.139678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.090 [2024-07-14 03:17:32.139862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.090 [2024-07-14 03:17:32.139895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.090 [2024-07-14 03:17:32.139912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.090 [2024-07-14 03:17:32.140062] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.090 [2024-07-14 03:17:32.140273] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.090 [2024-07-14 03:17:32.140296] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.090 [2024-07-14 03:17:32.140310] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.090 [2024-07-14 03:17:32.142366] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.090 [2024-07-14 03:17:32.151730] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.090 [2024-07-14 03:17:32.152095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.090 [2024-07-14 03:17:32.152283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.090 [2024-07-14 03:17:32.152311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.090 [2024-07-14 03:17:32.152328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.090 [2024-07-14 03:17:32.152493] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.090 [2024-07-14 03:17:32.152636] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.090 [2024-07-14 03:17:32.152658] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.090 [2024-07-14 03:17:32.152672] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.090 [2024-07-14 03:17:32.154609] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.090 [2024-07-14 03:17:32.164085] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.090 [2024-07-14 03:17:32.164446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.090 [2024-07-14 03:17:32.164605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.090 [2024-07-14 03:17:32.164633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.090 [2024-07-14 03:17:32.164664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.090 [2024-07-14 03:17:32.164810] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.090 [2024-07-14 03:17:32.165016] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.090 [2024-07-14 03:17:32.165040] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.090 [2024-07-14 03:17:32.165055] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.090 [2024-07-14 03:17:32.167129] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.090 [2024-07-14 03:17:32.176267] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.090 [2024-07-14 03:17:32.176709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.090 [2024-07-14 03:17:32.176888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.090 [2024-07-14 03:17:32.176916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.090 [2024-07-14 03:17:32.176932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.090 [2024-07-14 03:17:32.177084] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.090 [2024-07-14 03:17:32.177279] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.090 [2024-07-14 03:17:32.177302] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.090 [2024-07-14 03:17:32.177316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.090 [2024-07-14 03:17:32.179353] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.090 [2024-07-14 03:17:32.188470] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.090 [2024-07-14 03:17:32.188857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.090 [2024-07-14 03:17:32.189025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.090 [2024-07-14 03:17:32.189053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.090 [2024-07-14 03:17:32.189070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.090 [2024-07-14 03:17:32.189221] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.090 [2024-07-14 03:17:32.189368] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.090 [2024-07-14 03:17:32.189390] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.090 [2024-07-14 03:17:32.189404] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.090 [2024-07-14 03:17:32.191456] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.090 [2024-07-14 03:17:32.200869] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.090 [2024-07-14 03:17:32.201233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.090 [2024-07-14 03:17:32.201381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.090 [2024-07-14 03:17:32.201413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.090 [2024-07-14 03:17:32.201431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.090 [2024-07-14 03:17:32.201596] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.090 [2024-07-14 03:17:32.201756] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.090 [2024-07-14 03:17:32.201779] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.090 [2024-07-14 03:17:32.201793] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.090 [2024-07-14 03:17:32.203861] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.090 [2024-07-14 03:17:32.213231] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.090 [2024-07-14 03:17:32.213565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.090 [2024-07-14 03:17:32.213765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.090 [2024-07-14 03:17:32.213790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.090 [2024-07-14 03:17:32.213807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.090 [2024-07-14 03:17:32.213950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.090 [2024-07-14 03:17:32.214131] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.090 [2024-07-14 03:17:32.214168] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.090 [2024-07-14 03:17:32.214182] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.090 [2024-07-14 03:17:32.216250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.090 [2024-07-14 03:17:32.225408] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.090 [2024-07-14 03:17:32.225779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.090 [2024-07-14 03:17:32.225956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.090 [2024-07-14 03:17:32.225985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.090 [2024-07-14 03:17:32.226001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.090 [2024-07-14 03:17:32.226150] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.090 [2024-07-14 03:17:32.226312] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.090 [2024-07-14 03:17:32.226335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.090 [2024-07-14 03:17:32.226349] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.090 [2024-07-14 03:17:32.228401] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.090 [2024-07-14 03:17:32.237803] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.090 [2024-07-14 03:17:32.238110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.090 [2024-07-14 03:17:32.238296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.090 [2024-07-14 03:17:32.238324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.090 [2024-07-14 03:17:32.238345] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.091 [2024-07-14 03:17:32.238479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.091 [2024-07-14 03:17:32.238641] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.091 [2024-07-14 03:17:32.238664] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.091 [2024-07-14 03:17:32.238678] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.091 [2024-07-14 03:17:32.240654] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.091 [2024-07-14 03:17:32.250250] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.091 [2024-07-14 03:17:32.250556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.091 [2024-07-14 03:17:32.250735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.091 [2024-07-14 03:17:32.250763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.091 [2024-07-14 03:17:32.250779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.091 [2024-07-14 03:17:32.250970] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.091 [2024-07-14 03:17:32.251108] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.091 [2024-07-14 03:17:32.251132] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.091 [2024-07-14 03:17:32.251162] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.091 [2024-07-14 03:17:32.253249] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.091 [2024-07-14 03:17:32.262439] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.091 [2024-07-14 03:17:32.262800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.091 [2024-07-14 03:17:32.262974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.091 [2024-07-14 03:17:32.263002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.091 [2024-07-14 03:17:32.263018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.091 [2024-07-14 03:17:32.263214] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.091 [2024-07-14 03:17:32.263358] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.091 [2024-07-14 03:17:32.263380] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.091 [2024-07-14 03:17:32.263393] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.091 [2024-07-14 03:17:32.265537] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.091 [2024-07-14 03:17:32.274762] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.091 [2024-07-14 03:17:32.275151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.091 [2024-07-14 03:17:32.275332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.091 [2024-07-14 03:17:32.275358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.091 [2024-07-14 03:17:32.275375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.091 [2024-07-14 03:17:32.275527] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.091 [2024-07-14 03:17:32.275674] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.091 [2024-07-14 03:17:32.275696] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.091 [2024-07-14 03:17:32.275710] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.091 [2024-07-14 03:17:32.277753] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.091 [2024-07-14 03:17:32.287117] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.091 [2024-07-14 03:17:32.287561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.091 [2024-07-14 03:17:32.287741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.091 [2024-07-14 03:17:32.287767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.091 [2024-07-14 03:17:32.287783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.091 [2024-07-14 03:17:32.287973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.091 [2024-07-14 03:17:32.288138] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.091 [2024-07-14 03:17:32.288161] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.091 [2024-07-14 03:17:32.288189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.091 [2024-07-14 03:17:32.290203] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.091 [2024-07-14 03:17:32.299423] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.091 [2024-07-14 03:17:32.299756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.091 [2024-07-14 03:17:32.299964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.091 [2024-07-14 03:17:32.300000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.091 [2024-07-14 03:17:32.300017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.091 [2024-07-14 03:17:32.300214] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.091 [2024-07-14 03:17:32.300345] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.091 [2024-07-14 03:17:32.300366] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.091 [2024-07-14 03:17:32.300380] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.091 [2024-07-14 03:17:32.302486] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.091 [2024-07-14 03:17:32.311707] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.091 [2024-07-14 03:17:32.312065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.091 [2024-07-14 03:17:32.312224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.091 [2024-07-14 03:17:32.312250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.091 [2024-07-14 03:17:32.312266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.091 [2024-07-14 03:17:32.312432] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.091 [2024-07-14 03:17:32.312581] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.091 [2024-07-14 03:17:32.312603] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.091 [2024-07-14 03:17:32.312617] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.091 [2024-07-14 03:17:32.314746] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.091 [2024-07-14 03:17:32.324091] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.091 [2024-07-14 03:17:32.324518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.091 [2024-07-14 03:17:32.324701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.091 [2024-07-14 03:17:32.324727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.091 [2024-07-14 03:17:32.324743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.091 [2024-07-14 03:17:32.324885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.091 [2024-07-14 03:17:32.325065] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.091 [2024-07-14 03:17:32.325087] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.091 [2024-07-14 03:17:32.325101] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.091 [2024-07-14 03:17:32.327154] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.091 [2024-07-14 03:17:32.336629] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.091 [2024-07-14 03:17:32.337028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.091 [2024-07-14 03:17:32.337185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.091 [2024-07-14 03:17:32.337211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.091 [2024-07-14 03:17:32.337227] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.091 [2024-07-14 03:17:32.337407] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.091 [2024-07-14 03:17:32.337555] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.091 [2024-07-14 03:17:32.337576] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.091 [2024-07-14 03:17:32.337590] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.091 [2024-07-14 03:17:32.339841] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.351 [2024-07-14 03:17:32.349023] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.351 [2024-07-14 03:17:32.349490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.351 [2024-07-14 03:17:32.349637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.351 [2024-07-14 03:17:32.349663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.351 [2024-07-14 03:17:32.349680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.351 [2024-07-14 03:17:32.349814] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.351 [2024-07-14 03:17:32.350037] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.351 [2024-07-14 03:17:32.350068] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.351 [2024-07-14 03:17:32.350083] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.351 [2024-07-14 03:17:32.352218] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.351 [2024-07-14 03:17:32.361428] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.351 [2024-07-14 03:17:32.361775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.351 [2024-07-14 03:17:32.361971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.351 [2024-07-14 03:17:32.361998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.351 [2024-07-14 03:17:32.362014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.351 [2024-07-14 03:17:32.362133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.351 [2024-07-14 03:17:32.362295] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.351 [2024-07-14 03:17:32.362316] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.351 [2024-07-14 03:17:32.362331] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.351 [2024-07-14 03:17:32.364338] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.351 [2024-07-14 03:17:32.373625] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.351 [2024-07-14 03:17:32.373991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.351 [2024-07-14 03:17:32.374210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.351 [2024-07-14 03:17:32.374236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.351 [2024-07-14 03:17:32.374253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.351 [2024-07-14 03:17:32.374370] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.351 [2024-07-14 03:17:32.374534] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.351 [2024-07-14 03:17:32.374555] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.351 [2024-07-14 03:17:32.374570] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.351 [2024-07-14 03:17:32.376660] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.351 [2024-07-14 03:17:32.386099] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.351 [2024-07-14 03:17:32.386496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.351 [2024-07-14 03:17:32.386677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.351 [2024-07-14 03:17:32.386703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.351 [2024-07-14 03:17:32.386720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.351 [2024-07-14 03:17:32.386895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.351 [2024-07-14 03:17:32.387076] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.351 [2024-07-14 03:17:32.387098] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.351 [2024-07-14 03:17:32.387119] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.351 [2024-07-14 03:17:32.389012] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.351 [2024-07-14 03:17:32.398431] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.351 [2024-07-14 03:17:32.398843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.351 [2024-07-14 03:17:32.399032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.351 [2024-07-14 03:17:32.399059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.351 [2024-07-14 03:17:32.399076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.351 [2024-07-14 03:17:32.399256] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.352 [2024-07-14 03:17:32.399417] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.352 [2024-07-14 03:17:32.399439] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.352 [2024-07-14 03:17:32.399453] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.352 [2024-07-14 03:17:32.401413] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.352 [2024-07-14 03:17:32.410744] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.352 [2024-07-14 03:17:32.411087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.352 [2024-07-14 03:17:32.411265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.352 [2024-07-14 03:17:32.411292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.352 [2024-07-14 03:17:32.411308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.352 [2024-07-14 03:17:32.411474] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.352 [2024-07-14 03:17:32.411652] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.352 [2024-07-14 03:17:32.411673] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.352 [2024-07-14 03:17:32.411687] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.352 [2024-07-14 03:17:32.413587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.352 [2024-07-14 03:17:32.423097] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.352 [2024-07-14 03:17:32.423432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.352 [2024-07-14 03:17:32.423587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.352 [2024-07-14 03:17:32.423613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.352 [2024-07-14 03:17:32.423630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.352 [2024-07-14 03:17:32.423748] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.352 [2024-07-14 03:17:32.423919] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.352 [2024-07-14 03:17:32.423942] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.352 [2024-07-14 03:17:32.423957] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.352 [2024-07-14 03:17:32.426018] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.352 [2024-07-14 03:17:32.435410] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.352 [2024-07-14 03:17:32.435748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.352 [2024-07-14 03:17:32.435902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.352 [2024-07-14 03:17:32.435929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.352 [2024-07-14 03:17:32.435945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.352 [2024-07-14 03:17:32.436046] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.352 [2024-07-14 03:17:32.436231] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.352 [2024-07-14 03:17:32.436253] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.352 [2024-07-14 03:17:32.436267] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.352 [2024-07-14 03:17:32.438213] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.352 [2024-07-14 03:17:32.447711] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.352 [2024-07-14 03:17:32.448073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.352 [2024-07-14 03:17:32.448256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.352 [2024-07-14 03:17:32.448283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.352 [2024-07-14 03:17:32.448300] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.352 [2024-07-14 03:17:32.448401] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.352 [2024-07-14 03:17:32.448596] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.352 [2024-07-14 03:17:32.448619] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.352 [2024-07-14 03:17:32.448633] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.352 [2024-07-14 03:17:32.450654] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.352 [2024-07-14 03:17:32.460125] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:37.352 [2024-07-14 03:17:32.460438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.352 [2024-07-14 03:17:32.460643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.352 [2024-07-14 03:17:32.460669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420
00:29:37.352 [2024-07-14 03:17:32.460686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set
00:29:37.352 [2024-07-14 03:17:32.460889] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor
00:29:37.352 [2024-07-14 03:17:32.461045] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:37.352 [2024-07-14 03:17:32.461067] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:37.352 [2024-07-14 03:17:32.461082] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:37.352 [2024-07-14 03:17:32.463296] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:37.352 [2024-07-14 03:17:32.472438] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.352 [2024-07-14 03:17:32.472738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.352 [2024-07-14 03:17:32.472919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.352 [2024-07-14 03:17:32.472947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.352 [2024-07-14 03:17:32.472963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.352 [2024-07-14 03:17:32.473113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.352 [2024-07-14 03:17:32.473293] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.352 [2024-07-14 03:17:32.473314] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.352 [2024-07-14 03:17:32.473328] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.352 [2024-07-14 03:17:32.475219] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.352 [2024-07-14 03:17:32.484682] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.352 [2024-07-14 03:17:32.485063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.352 [2024-07-14 03:17:32.485210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.352 [2024-07-14 03:17:32.485236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.352 [2024-07-14 03:17:32.485253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.352 [2024-07-14 03:17:32.485430] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.352 [2024-07-14 03:17:32.485605] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.352 [2024-07-14 03:17:32.485626] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.352 [2024-07-14 03:17:32.485641] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.352 [2024-07-14 03:17:32.487725] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.352 [2024-07-14 03:17:32.497044] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.352 [2024-07-14 03:17:32.497463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.352 [2024-07-14 03:17:32.497641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.352 [2024-07-14 03:17:32.497668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.352 [2024-07-14 03:17:32.497684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.352 [2024-07-14 03:17:32.497835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.352 [2024-07-14 03:17:32.498010] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.352 [2024-07-14 03:17:32.498033] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.352 [2024-07-14 03:17:32.498047] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.352 [2024-07-14 03:17:32.500208] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.352 [2024-07-14 03:17:32.509140] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.352 [2024-07-14 03:17:32.509530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.352 [2024-07-14 03:17:32.509694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.352 [2024-07-14 03:17:32.509720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.352 [2024-07-14 03:17:32.509737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.352 [2024-07-14 03:17:32.509854] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.352 [2024-07-14 03:17:32.510045] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.352 [2024-07-14 03:17:32.510067] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.352 [2024-07-14 03:17:32.510082] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.352 [2024-07-14 03:17:32.512158] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.352 [2024-07-14 03:17:32.521643] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.352 [2024-07-14 03:17:32.522005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.352 [2024-07-14 03:17:32.522188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.352 [2024-07-14 03:17:32.522214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.352 [2024-07-14 03:17:32.522230] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.352 [2024-07-14 03:17:32.522380] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.352 [2024-07-14 03:17:32.522559] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.353 [2024-07-14 03:17:32.522583] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.353 [2024-07-14 03:17:32.522597] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.353 [2024-07-14 03:17:32.524843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.353 03:17:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:37.353 03:17:32 -- common/autotest_common.sh@852 -- # return 0 00:29:37.353 03:17:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:37.353 03:17:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:37.353 03:17:32 -- common/autotest_common.sh@10 -- # set +x 00:29:37.353 [2024-07-14 03:17:32.534080] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.353 [2024-07-14 03:17:32.534419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.353 [2024-07-14 03:17:32.534621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.353 [2024-07-14 03:17:32.534647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.353 [2024-07-14 03:17:32.534664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.353 [2024-07-14 03:17:32.534889] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.353 [2024-07-14 03:17:32.535084] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.353 [2024-07-14 03:17:32.535107] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.353 [2024-07-14 03:17:32.535122] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.353 [2024-07-14 03:17:32.537027] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.353 [2024-07-14 03:17:32.546590] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.353 [2024-07-14 03:17:32.546941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.353 [2024-07-14 03:17:32.547105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.353 [2024-07-14 03:17:32.547132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.353 [2024-07-14 03:17:32.547149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.353 [2024-07-14 03:17:32.547313] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.353 [2024-07-14 03:17:32.547475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.353 [2024-07-14 03:17:32.547497] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.353 [2024-07-14 03:17:32.547511] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.353 [2024-07-14 03:17:32.549599] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.353 03:17:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:37.353 03:17:32 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:37.353 03:17:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.353 03:17:32 -- common/autotest_common.sh@10 -- # set +x 00:29:37.353 [2024-07-14 03:17:32.556135] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:37.353 [2024-07-14 03:17:32.558796] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.353 [2024-07-14 03:17:32.559153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.353 [2024-07-14 03:17:32.559355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.353 [2024-07-14 03:17:32.559380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.353 [2024-07-14 03:17:32.559397] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.353 [2024-07-14 03:17:32.559546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.353 [2024-07-14 03:17:32.559693] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.353 [2024-07-14 03:17:32.559716] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.353 [2024-07-14 03:17:32.559730] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.353 [2024-07-14 03:17:32.561845] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.353 03:17:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.353 03:17:32 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:37.353 03:17:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.353 03:17:32 -- common/autotest_common.sh@10 -- # set +x 00:29:37.353 [2024-07-14 03:17:32.570882] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.353 [2024-07-14 03:17:32.571255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.353 [2024-07-14 03:17:32.571437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.353 [2024-07-14 03:17:32.571463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.353 [2024-07-14 03:17:32.571479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.353 [2024-07-14 03:17:32.571642] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.353 [2024-07-14 03:17:32.571770] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.353 [2024-07-14 03:17:32.571795] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.353 [2024-07-14 03:17:32.571809] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.353 [2024-07-14 03:17:32.573784] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.353 [2024-07-14 03:17:32.583251] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.353 [2024-07-14 03:17:32.583777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.353 [2024-07-14 03:17:32.583988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.353 [2024-07-14 03:17:32.584017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.353 [2024-07-14 03:17:32.584036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.353 [2024-07-14 03:17:32.584214] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.353 [2024-07-14 03:17:32.584379] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.353 [2024-07-14 03:17:32.584402] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.353 [2024-07-14 03:17:32.584418] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.353 [2024-07-14 03:17:32.586524] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.353 Malloc0 00:29:37.353 [2024-07-14 03:17:32.595452] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.353 03:17:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.353 03:17:32 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:37.353 [2024-07-14 03:17:32.595948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.353 03:17:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.353 03:17:32 -- common/autotest_common.sh@10 -- # set +x 00:29:37.353 [2024-07-14 03:17:32.596151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.353 [2024-07-14 03:17:32.596179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.353 [2024-07-14 03:17:32.596198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.353 [2024-07-14 03:17:32.596374] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.353 [2024-07-14 03:17:32.596482] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.353 [2024-07-14 03:17:32.596506] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.353 [2024-07-14 03:17:32.596537] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.353 [2024-07-14 03:17:32.598736] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.353 03:17:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.353 03:17:32 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:37.353 03:17:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.353 03:17:32 -- common/autotest_common.sh@10 -- # set +x 00:29:37.613 [2024-07-14 03:17:32.607821] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.613 [2024-07-14 03:17:32.608249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.613 [2024-07-14 03:17:32.608463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.613 [2024-07-14 03:17:32.608489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10b9c20 with addr=10.0.0.2, port=4420 00:29:37.613 [2024-07-14 03:17:32.608514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b9c20 is same with the state(5) to be set 00:29:37.613 [2024-07-14 03:17:32.608620] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10b9c20 (9): Bad file descriptor 00:29:37.613 [2024-07-14 03:17:32.608808] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:37.613 [2024-07-14 03:17:32.608848] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:37.613 [2024-07-14 03:17:32.608903] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:37.613 [2024-07-14 03:17:32.610941] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:37.613 03:17:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.613 03:17:32 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:37.613 03:17:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:37.613 03:17:32 -- common/autotest_common.sh@10 -- # set +x 00:29:37.613 [2024-07-14 03:17:32.615076] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:37.613 03:17:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:37.613 03:17:32 -- host/bdevperf.sh@38 -- # wait 2127814 00:29:37.613 [2024-07-14 03:17:32.620365] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:37.613 [2024-07-14 03:17:32.780932] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:29:45.736 00:29:45.736 Latency(us) 00:29:45.736 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.736 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:45.736 Verification LBA range: start 0x0 length 0x4000 00:29:45.736 Nvme1n1 : 15.01 8822.46 34.46 16182.86 0.00 5104.20 1165.08 22524.97 00:29:45.736 =================================================================================================================== 00:29:45.736 Total : 8822.46 34.46 16182.86 0.00 5104.20 1165.08 22524.97 00:29:45.994 03:17:41 -- host/bdevperf.sh@39 -- # sync 00:29:45.994 03:17:41 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:45.994 03:17:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:45.994 03:17:41 -- common/autotest_common.sh@10 -- # set +x 00:29:45.994 03:17:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:45.994 03:17:41 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:45.994 03:17:41 -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:45.994 03:17:41 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:29:45.994 03:17:41 -- nvmf/common.sh@116 -- # sync 00:29:45.994 03:17:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:45.994 03:17:41 -- nvmf/common.sh@119 -- # set +e 00:29:45.994 03:17:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:45.994 03:17:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:45.994 rmmod nvme_tcp 00:29:45.994 rmmod nvme_fabrics 00:29:45.994 rmmod nvme_keyring 00:29:45.994 03:17:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:45.994 03:17:41 -- nvmf/common.sh@123 -- # set -e 00:29:45.994 03:17:41 -- nvmf/common.sh@124 -- # return 0 00:29:45.994 03:17:41 -- nvmf/common.sh@477 -- # '[' -n 2128500 ']' 00:29:45.994 03:17:41 -- nvmf/common.sh@478 -- # killprocess 2128500 00:29:45.994 03:17:41 -- common/autotest_common.sh@926 -- # '[' -z 2128500 ']' 00:29:45.994 03:17:41 -- common/autotest_common.sh@930 -- # kill -0 2128500 00:29:45.994 03:17:41 -- common/autotest_common.sh@931 -- # uname 00:29:45.994 03:17:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:45.994 03:17:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2128500 00:29:45.994 03:17:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:45.994 03:17:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:45.994 03:17:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2128500' 00:29:45.994 killing process with pid 2128500 00:29:45.994 03:17:41 -- common/autotest_common.sh@945 -- # kill 2128500 00:29:45.994 03:17:41 -- common/autotest_common.sh@950 -- # wait 2128500 00:29:46.252 03:17:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:46.252 03:17:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:46.252 03:17:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:46.252 03:17:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:46.252 03:17:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 
00:29:46.252 03:17:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.252 03:17:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:46.252 03:17:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.808 03:17:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:48.808 00:29:48.808 real 0m22.808s 00:29:48.808 user 1m0.668s 00:29:48.808 sys 0m4.642s 00:29:48.808 03:17:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:48.808 03:17:43 -- common/autotest_common.sh@10 -- # set +x 00:29:48.808 ************************************ 00:29:48.808 END TEST nvmf_bdevperf 00:29:48.808 ************************************ 00:29:48.808 03:17:43 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:48.808 03:17:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:48.808 03:17:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:48.808 03:17:43 -- common/autotest_common.sh@10 -- # set +x 00:29:48.808 ************************************ 00:29:48.808 START TEST nvmf_target_disconnect 00:29:48.808 ************************************ 00:29:48.808 03:17:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:48.808 * Looking for test storage... 
00:29:48.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:48.808 03:17:43 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.808 03:17:43 -- nvmf/common.sh@7 -- # uname -s 00:29:48.808 03:17:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.808 03:17:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.808 03:17:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.808 03:17:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.808 03:17:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.808 03:17:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.808 03:17:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.808 03:17:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.808 03:17:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.808 03:17:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.808 03:17:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:48.808 03:17:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:48.808 03:17:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.808 03:17:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.809 03:17:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:48.809 03:17:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:48.809 03:17:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.809 03:17:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.809 03:17:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.809 03:17:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.809 03:17:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.809 03:17:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.809 03:17:43 -- paths/export.sh@5 -- # export PATH 00:29:48.809 03:17:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.809 03:17:43 -- nvmf/common.sh@46 -- # : 0 00:29:48.809 03:17:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:48.809 03:17:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:48.809 03:17:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:48.809 03:17:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.809 03:17:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.809 03:17:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:48.809 03:17:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:48.809 03:17:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:48.809 03:17:43 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:48.809 03:17:43 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:48.809 03:17:43 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:48.809 03:17:43 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:29:48.809 03:17:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:48.809 03:17:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.809 03:17:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:48.809 03:17:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:48.809 03:17:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:48.809 03:17:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.809 03:17:43 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:48.809 03:17:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.809 03:17:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:48.809 03:17:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:48.809 03:17:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:48.809 03:17:43 -- common/autotest_common.sh@10 -- # set +x 00:29:50.717 03:17:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:50.717 03:17:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:50.717 03:17:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:50.717 03:17:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:50.717 03:17:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:50.717 03:17:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:50.717 03:17:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:50.717 03:17:45 -- nvmf/common.sh@294 -- # net_devs=() 00:29:50.717 03:17:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:50.717 03:17:45 -- nvmf/common.sh@295 -- # e810=() 00:29:50.717 03:17:45 -- nvmf/common.sh@295 -- # local -ga e810 00:29:50.717 03:17:45 -- nvmf/common.sh@296 -- # x722=() 00:29:50.717 03:17:45 -- nvmf/common.sh@296 -- # local -ga x722 00:29:50.717 03:17:45 -- nvmf/common.sh@297 -- # mlx=() 00:29:50.717 03:17:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:50.717 03:17:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:50.717 03:17:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:50.717 03:17:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:50.717 03:17:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:50.718 03:17:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:50.718 03:17:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:50.718 03:17:45 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:50.718 03:17:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:50.718 03:17:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:50.718 03:17:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:50.718 03:17:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:50.718 03:17:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:50.718 03:17:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:50.718 03:17:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:50.718 03:17:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:50.718 03:17:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:50.718 03:17:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:50.718 03:17:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:50.718 03:17:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:50.718 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:50.718 03:17:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:50.718 03:17:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:50.718 03:17:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.718 03:17:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.718 03:17:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:50.718 03:17:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:50.718 03:17:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:50.718 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:50.718 03:17:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:50.718 03:17:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:50.718 03:17:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:50.718 03:17:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:50.718 03:17:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
00:29:50.718 03:17:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:50.718 03:17:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:50.718 03:17:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:50.718 03:17:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:50.718 03:17:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.718 03:17:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:50.718 03:17:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.718 03:17:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:50.718 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:50.718 03:17:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.718 03:17:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:50.718 03:17:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:50.718 03:17:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:50.718 03:17:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:50.718 03:17:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:50.718 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:50.718 03:17:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:50.718 03:17:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:50.718 03:17:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:50.718 03:17:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:50.718 03:17:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:50.718 03:17:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:50.718 03:17:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:50.718 03:17:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:50.718 03:17:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:50.718 03:17:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:50.718 03:17:45 -- nvmf/common.sh@235 -- 
# NVMF_TARGET_INTERFACE=cvl_0_0 00:29:50.718 03:17:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:50.718 03:17:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:50.718 03:17:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:50.718 03:17:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:50.718 03:17:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:50.718 03:17:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:50.718 03:17:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:50.718 03:17:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:50.718 03:17:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:50.718 03:17:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:50.718 03:17:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:50.718 03:17:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:50.718 03:17:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:50.718 03:17:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:50.718 03:17:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:50.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:50.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:29:50.718 00:29:50.718 --- 10.0.0.2 ping statistics --- 00:29:50.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.718 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:29:50.718 03:17:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:50.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:50.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:29:50.718 00:29:50.718 --- 10.0.0.1 ping statistics --- 00:29:50.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:50.718 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:29:50.718 03:17:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:50.718 03:17:45 -- nvmf/common.sh@410 -- # return 0 00:29:50.718 03:17:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:50.718 03:17:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:50.718 03:17:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:50.718 03:17:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:50.718 03:17:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:50.718 03:17:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:50.718 03:17:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:50.718 03:17:45 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:50.718 03:17:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:50.718 03:17:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:50.718 03:17:45 -- common/autotest_common.sh@10 -- # set +x 00:29:50.718 ************************************ 00:29:50.718 START TEST nvmf_target_disconnect_tc1 00:29:50.718 ************************************ 00:29:50.718 03:17:45 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:29:50.718 03:17:45 -- host/target_disconnect.sh@32 -- # set +e 00:29:50.718 03:17:45 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:50.718 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.718 [2024-07-14 03:17:45.772145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 
[2024-07-14 03:17:45.772440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-07-14 03:17:45.772472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13e8280 with addr=10.0.0.2, port=4420 00:29:50.718 [2024-07-14 03:17:45.772509] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:50.718 [2024-07-14 03:17:45.772536] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:50.718 [2024-07-14 03:17:45.772551] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:50.718 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:50.718 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:50.718 Initializing NVMe Controllers 00:29:50.718 03:17:45 -- host/target_disconnect.sh@33 -- # trap - ERR 00:29:50.718 03:17:45 -- host/target_disconnect.sh@33 -- # print_backtrace 00:29:50.718 03:17:45 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:29:50.718 03:17:45 -- common/autotest_common.sh@1132 -- # return 0 00:29:50.718 03:17:45 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:29:50.718 03:17:45 -- host/target_disconnect.sh@41 -- # set -e 00:29:50.718 00:29:50.718 real 0m0.094s 00:29:50.718 user 0m0.040s 00:29:50.718 sys 0m0.053s 00:29:50.718 03:17:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:50.718 03:17:45 -- common/autotest_common.sh@10 -- # set +x 00:29:50.718 ************************************ 00:29:50.718 END TEST nvmf_target_disconnect_tc1 00:29:50.718 ************************************ 00:29:50.718 03:17:45 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:50.718 03:17:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:50.718 03:17:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:50.718 03:17:45 -- common/autotest_common.sh@10 -- # set +x 00:29:50.718 
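The tc1 failure above is driven entirely by ECONNREFUSED (errno = 111) from `posix_sock_create`, reported before the target is listening. A minimal sketch for pulling those errno values out of a captured log (a hypothetical post-processing helper, not part of SPDK or this test harness):

```python
import re
from collections import Counter

# Matches SPDK socket-layer connect failures such as:
# [2024-07-14 03:17:45.772145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
CONNECT_ERR = re.compile(
    r"posix_sock_create: \*ERROR\*: connect\(\) failed, errno = (\d+)"
)

def count_connect_errnos(log_text: str) -> Counter:
    """Tally errno values from posix_sock_create failures in an SPDK log."""
    return Counter(int(m.group(1)) for m in CONNECT_ERR.finditer(log_text))

sample = (
    "[2024-07-14 03:17:45.772145] posix.c:1032:posix_sock_create: "
    "*ERROR*: connect() failed, errno = 111\n"
    "[2024-07-14 03:17:45.772440] posix.c:1032:posix_sock_create: "
    "*ERROR*: connect() failed, errno = 111\n"
)
print(count_connect_errnos(sample))  # Counter({111: 2})
```

A tally like this makes it easy to confirm that every connect attempt in a run failed for the same reason (here, connection refused) rather than a mix of causes.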
************************************ 00:29:50.718 START TEST nvmf_target_disconnect_tc2 00:29:50.718 ************************************ 00:29:50.718 03:17:45 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:29:50.718 03:17:45 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:29:50.718 03:17:45 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:50.718 03:17:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:50.718 03:17:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:50.718 03:17:45 -- common/autotest_common.sh@10 -- # set +x 00:29:50.718 03:17:45 -- nvmf/common.sh@469 -- # nvmfpid=2131699 00:29:50.718 03:17:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:50.718 03:17:45 -- nvmf/common.sh@470 -- # waitforlisten 2131699 00:29:50.718 03:17:45 -- common/autotest_common.sh@819 -- # '[' -z 2131699 ']' 00:29:50.718 03:17:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.718 03:17:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:50.718 03:17:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:50.718 03:17:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:50.718 03:17:45 -- common/autotest_common.sh@10 -- # set +x 00:29:50.718 [2024-07-14 03:17:45.861054] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:29:50.718 [2024-07-14 03:17:45.861127] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.718 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.718 [2024-07-14 03:17:45.925262] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:50.978 [2024-07-14 03:17:46.005717] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:50.978 [2024-07-14 03:17:46.005893] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:50.978 [2024-07-14 03:17:46.005912] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:50.978 [2024-07-14 03:17:46.005924] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:50.978 [2024-07-14 03:17:46.006122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:50.978 [2024-07-14 03:17:46.006185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:50.978 [2024-07-14 03:17:46.006335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:50.978 [2024-07-14 03:17:46.006338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:51.545 03:17:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:51.545 03:17:46 -- common/autotest_common.sh@852 -- # return 0 00:29:51.545 03:17:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:51.545 03:17:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:51.545 03:17:46 -- common/autotest_common.sh@10 -- # set +x 00:29:51.804 03:17:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:51.804 03:17:46 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:29:51.804 03:17:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:51.804 03:17:46 -- common/autotest_common.sh@10 -- # set +x 00:29:51.804 Malloc0 00:29:51.804 03:17:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:51.804 03:17:46 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:51.804 03:17:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:51.804 03:17:46 -- common/autotest_common.sh@10 -- # set +x 00:29:51.804 [2024-07-14 03:17:46.832356] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:51.804 03:17:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:51.804 03:17:46 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:51.804 03:17:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:51.804 03:17:46 -- common/autotest_common.sh@10 -- # set +x 00:29:51.804 03:17:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:51.804 03:17:46 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:51.804 03:17:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:51.804 03:17:46 -- common/autotest_common.sh@10 -- # set +x 00:29:51.804 03:17:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:51.804 03:17:46 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:51.804 03:17:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:51.804 03:17:46 -- common/autotest_common.sh@10 -- # set +x 00:29:51.804 [2024-07-14 03:17:46.860642] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:51.804 03:17:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:51.804 03:17:46 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:51.804 
03:17:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:51.804 03:17:46 -- common/autotest_common.sh@10 -- # set +x 00:29:51.804 03:17:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:51.804 03:17:46 -- host/target_disconnect.sh@50 -- # reconnectpid=2131857 00:29:51.804 03:17:46 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:51.804 03:17:46 -- host/target_disconnect.sh@52 -- # sleep 2 00:29:51.804 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.706 03:17:48 -- host/target_disconnect.sh@53 -- # kill -9 2131699 00:29:53.706 03:17:48 -- host/target_disconnect.sh@55 -- # sleep 2 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 
00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Write completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Write completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Write completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Write completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Write completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Write completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Write completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Write completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 [2024-07-14 03:17:48.884786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 
Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.706 Read completed with error (sct=0, sc=8) 00:29:53.706 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write 
completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 [2024-07-14 03:17:48.885136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with 
error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Read completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 Write completed with error (sct=0, sc=8) 00:29:53.707 starting I/O failed 00:29:53.707 [2024-07-14 03:17:48.885467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.707 [2024-07-14 03:17:48.885886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.886087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.886119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, 
port=4420 00:29:53.707 qpair failed and we were unable to recover it. 00:29:53.707 [2024-07-14 03:17:48.886330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.886483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.886508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.707 qpair failed and we were unable to recover it. 00:29:53.707 [2024-07-14 03:17:48.886745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.886968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.886994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.707 qpair failed and we were unable to recover it. 00:29:53.707 [2024-07-14 03:17:48.887146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.887350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.887378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.707 qpair failed and we were unable to recover it. 00:29:53.707 [2024-07-14 03:17:48.887609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.887797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.887825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.707 qpair failed and we were unable to recover it. 
00:29:53.707 [2024-07-14 03:17:48.888044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.888224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.888249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.707 qpair failed and we were unable to recover it. 00:29:53.707 [2024-07-14 03:17:48.888396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.888605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.888666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.707 qpair failed and we were unable to recover it. 00:29:53.707 [2024-07-14 03:17:48.888840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.889055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.889080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.707 qpair failed and we were unable to recover it. 00:29:53.707 [2024-07-14 03:17:48.889240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.889415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.889440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.707 qpair failed and we were unable to recover it. 
00:29:53.707 [2024-07-14 03:17:48.889617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.889793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.889817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.707 qpair failed and we were unable to recover it. 00:29:53.707 [2024-07-14 03:17:48.890042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.890196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.890225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.707 qpair failed and we were unable to recover it. 00:29:53.707 [2024-07-14 03:17:48.890417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.890627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.890681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.707 qpair failed and we were unable to recover it. 00:29:53.707 [2024-07-14 03:17:48.890884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.891073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.707 [2024-07-14 03:17:48.891097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.707 qpair failed and we were unable to recover it. 
00:29:53.710 [... identical error cycle (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats through 2024-07-14 03:17:48.925645 ...]
00:29:53.710 [2024-07-14 03:17:48.925847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-14 03:17:48.926048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-14 03:17:48.926076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-14 03:17:48.926283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-14 03:17:48.926486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-14 03:17:48.926511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-14 03:17:48.926689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-14 03:17:48.926871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-14 03:17:48.926895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-14 03:17:48.927100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-14 03:17:48.927447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-14 03:17:48.927501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 
00:29:53.710 [2024-07-14 03:17:48.927722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-14 03:17:48.927965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-14 03:17:48.927991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-14 03:17:48.928196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-14 03:17:48.928504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-14 03:17:48.928560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-14 03:17:48.928775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-14 03:17:48.928969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-14 03:17:48.928998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.710 qpair failed and we were unable to recover it. 00:29:53.710 [2024-07-14 03:17:48.929201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-14 03:17:48.929393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.710 [2024-07-14 03:17:48.929420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 
00:29:53.711 [2024-07-14 03:17:48.929644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.929803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.929827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.930030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.930303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.930353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.930574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.930750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.930776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.930959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.931140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.931164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 
00:29:53.711 [2024-07-14 03:17:48.931329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.931549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.931574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.931774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.931996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.932025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.932254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.932613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.932678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.932897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.933052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.933079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 
00:29:53.711 [2024-07-14 03:17:48.933283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.933463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.933487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.933662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.933805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.933849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.934039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.934217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.934259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.934458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.934753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.934784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 
00:29:53.711 [2024-07-14 03:17:48.934999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.935195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.935223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.935383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.935621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.935675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.935833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.936041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.936067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.936217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.936398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.936440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 
00:29:53.711 [2024-07-14 03:17:48.936637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.936836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.936860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.937083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.937279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.937307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.937504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.937741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.937769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.937966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.938172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.938197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 
00:29:53.711 [2024-07-14 03:17:48.938389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.938663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.938715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.938890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.939072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.939097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.939304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.939546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.939594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.939801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.939961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.940000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 
00:29:53.711 [2024-07-14 03:17:48.940187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.940518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.940581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.711 [2024-07-14 03:17:48.940774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.941002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.711 [2024-07-14 03:17:48.941028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.711 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.941239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.941469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.941536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.941738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.941917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.941942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 
00:29:53.712 [2024-07-14 03:17:48.942110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.942425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.942486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.942707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.942883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.942931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.943134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.943311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.943336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.943542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.943779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.943807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 
00:29:53.712 [2024-07-14 03:17:48.944020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.944264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.944318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.944546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.944752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.944777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.944960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.945138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.945163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.945386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.945582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.945607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 
00:29:53.712 [2024-07-14 03:17:48.945783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.945993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.946019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.946224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.946403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.946427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.946570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.946721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.946746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.946924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.947127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.947152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 
00:29:53.712 [2024-07-14 03:17:48.947335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.947565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.947613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.947848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.948009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.948036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.948218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.948440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.948490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.948703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.948928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.948957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 
00:29:53.712 [2024-07-14 03:17:48.949127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.949377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.949429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.949627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.949823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.949850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.950050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.950231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.950260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.950440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.950608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.950633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 
00:29:53.712 [2024-07-14 03:17:48.950807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.951078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.951127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.951332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.951505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.951549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.951766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.952042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.952090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.952316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.952604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.952662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 
00:29:53.712 [2024-07-14 03:17:48.952903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.953058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.953083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.953286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.953478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.953506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.953701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.953914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.953950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 00:29:53.712 [2024-07-14 03:17:48.954150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.954345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.954373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.712 qpair failed and we were unable to recover it. 
00:29:53.712 [2024-07-14 03:17:48.954549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.712 [2024-07-14 03:17:48.954726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-14 03:17:48.954759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 00:29:53.713 [2024-07-14 03:17:48.954957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-14 03:17:48.955138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-14 03:17:48.955164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 00:29:53.713 [2024-07-14 03:17:48.955358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-14 03:17:48.955554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-14 03:17:48.955583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 00:29:53.713 [2024-07-14 03:17:48.955774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-14 03:17:48.955986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-14 03:17:48.956014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 
00:29:53.713 [2024-07-14 03:17:48.956225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-14 03:17:48.956405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-14 03:17:48.956431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 00:29:53.713 [2024-07-14 03:17:48.956614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-14 03:17:48.956829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-14 03:17:48.956856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 00:29:53.713 [2024-07-14 03:17:48.957095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-14 03:17:48.957381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-14 03:17:48.957405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 00:29:53.713 [2024-07-14 03:17:48.957609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-14 03:17:48.957795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.713 [2024-07-14 03:17:48.957830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.713 qpair failed and we were unable to recover it. 
00:29:53.713 [2024-07-14 03:17:48.958049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.958214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.958240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 00:29:53.982 [2024-07-14 03:17:48.958395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.958604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.958628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 00:29:53.982 [2024-07-14 03:17:48.958810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.958990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.959021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 00:29:53.982 [2024-07-14 03:17:48.959199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.959396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.959423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 
00:29:53.982 [2024-07-14 03:17:48.959622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.959853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.959884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 00:29:53.982 [2024-07-14 03:17:48.960040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.960194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.960218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 00:29:53.982 [2024-07-14 03:17:48.960403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.960558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.960585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 00:29:53.982 [2024-07-14 03:17:48.960799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.961017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.961043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 
00:29:53.982 [2024-07-14 03:17:48.961225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.961428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.961479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 00:29:53.982 [2024-07-14 03:17:48.961674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.961876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.961903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 00:29:53.982 [2024-07-14 03:17:48.962121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.962322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.962351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 00:29:53.982 [2024-07-14 03:17:48.962519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.962695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.962736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 
00:29:53.982 [2024-07-14 03:17:48.962945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.963123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.963155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 00:29:53.982 [2024-07-14 03:17:48.963334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.963538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.963563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 00:29:53.982 [2024-07-14 03:17:48.963720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.963917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.963945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 00:29:53.982 [2024-07-14 03:17:48.964177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.964376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.964401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 
00:29:53.982 [2024-07-14 03:17:48.964604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.964800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.964829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 00:29:53.982 [2024-07-14 03:17:48.965044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.965269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.965297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 00:29:53.982 [2024-07-14 03:17:48.965525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.965725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.965753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 00:29:53.982 [2024-07-14 03:17:48.965967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.966127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.966154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 
00:29:53.982 [2024-07-14 03:17:48.966359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.966594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.966645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 00:29:53.982 [2024-07-14 03:17:48.966844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.967026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.967051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.982 qpair failed and we were unable to recover it. 00:29:53.982 [2024-07-14 03:17:48.967240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.982 [2024-07-14 03:17:48.967513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.967565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.967774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.967932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.967958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 
00:29:53.983 [2024-07-14 03:17:48.968160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.968322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.968349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.968548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.968796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.968824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.969010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.969189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.969214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.969368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.969566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.969594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 
00:29:53.983 [2024-07-14 03:17:48.969805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.969978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.970004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.970161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.970340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.970365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.970554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.970706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.970732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.970937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.971120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.971145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 
00:29:53.983 [2024-07-14 03:17:48.971346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.971545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.971570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.971756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.971927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.971953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.972130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.972334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.972401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.972582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.972772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.972799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 
00:29:53.983 [2024-07-14 03:17:48.973008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.973300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.973353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.973542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.973737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.973764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.973933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.974115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.974143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.974339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.974530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.974560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 
00:29:53.983 [2024-07-14 03:17:48.974722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.974909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.974938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.975113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.975348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.975401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.975572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.975770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.975797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.975976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.976173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.976201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 
00:29:53.983 [2024-07-14 03:17:48.976402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.976712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.976765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.977000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.977167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.977195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.977417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.977595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.977620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.977796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.977965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.977991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 
00:29:53.983 [2024-07-14 03:17:48.978183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.978417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.978480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.978701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.978919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.978947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.979123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.979299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.979344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.983 [2024-07-14 03:17:48.979545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.979714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.979741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 
00:29:53.983 [2024-07-14 03:17:48.979943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.980139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.983 [2024-07-14 03:17:48.980167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.983 qpair failed and we were unable to recover it. 00:29:53.984 [2024-07-14 03:17:48.980346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.980504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.980529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.984 qpair failed and we were unable to recover it. 00:29:53.984 [2024-07-14 03:17:48.980694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.980881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.980911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.984 qpair failed and we were unable to recover it. 00:29:53.984 [2024-07-14 03:17:48.981108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.981301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.981326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.984 qpair failed and we were unable to recover it. 
00:29:53.984 [2024-07-14 03:17:48.981543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.981712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.981740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.984 qpair failed and we were unable to recover it. 00:29:53.984 [2024-07-14 03:17:48.981932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.982112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.982137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.984 qpair failed and we were unable to recover it. 00:29:53.984 [2024-07-14 03:17:48.982333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.982535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.982562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.984 qpair failed and we were unable to recover it. 00:29:53.984 [2024-07-14 03:17:48.982741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.982912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.982941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.984 qpair failed and we were unable to recover it. 
00:29:53.984 [2024-07-14 03:17:48.983105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.983294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.983324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.984 qpair failed and we were unable to recover it. 00:29:53.984 [2024-07-14 03:17:48.983525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.983682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.983710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.984 qpair failed and we were unable to recover it. 00:29:53.984 [2024-07-14 03:17:48.983903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.984081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.984141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.984 qpair failed and we were unable to recover it. 00:29:53.984 [2024-07-14 03:17:48.984357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.984560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.984 [2024-07-14 03:17:48.984587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.984 qpair failed and we were unable to recover it. 
00:29:53.984 [2024-07-14 03:17:48.984759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.984950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.984979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.985147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.985434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.985486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.985683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.985849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.985882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.986052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.986255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.986280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.986483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.986659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.986685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.986884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.987150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.987200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.987407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.987702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.987760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.987985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.988260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.988306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.988535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.988702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.988729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.988935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.989131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.989158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.989339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.989516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.989542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.989747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.989948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.989976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.990132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.990292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.990320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.990491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.990684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.990712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.990900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.991060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.991088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.991286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.991595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.991654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.991851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.992031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.992060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.992284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.992490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.992518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.992742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.992938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.992967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.984 qpair failed and we were unable to recover it.
00:29:53.984 [2024-07-14 03:17:48.993190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.984 [2024-07-14 03:17:48.993344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.993369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:48.993543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.993743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.993773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:48.993979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.994166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.994194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:48.994368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.994537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.994578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:48.994801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.995045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.995098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:48.995296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.995492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.995522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:48.995697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.995878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.995904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:48.996059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.996212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.996237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:48.996386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.996563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.996588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:48.996764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.996974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.997003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:48.997209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.997430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.997455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:48.997656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.997858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.997898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:48.998104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.998290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.998318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:48.998510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.998844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.998904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:48.999106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.999287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.999315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:48.999489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.999695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:48.999720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:48.999902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.000108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.000134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:49.000308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.000615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.000664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:49.000838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.001024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.001049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:49.001224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.001501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.001551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:49.001750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.001955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.001983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:49.002187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.002386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.002414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:49.002610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.002806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.002835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:49.003052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.003228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.003253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:49.003433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.003589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.003614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:49.003767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.003989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.004018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:49.004205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.004458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.004504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:49.004705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.004902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.004930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:49.005148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.005417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.005441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:49.005622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.005819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.005846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.985 [2024-07-14 03:17:49.006053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.006241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.985 [2024-07-14 03:17:49.006269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.985 qpair failed and we were unable to recover it.
00:29:53.986 [2024-07-14 03:17:49.006463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.006737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.006784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.986 qpair failed and we were unable to recover it.
00:29:53.986 [2024-07-14 03:17:49.006953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.007230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.007276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.986 qpair failed and we were unable to recover it.
00:29:53.986 [2024-07-14 03:17:49.007504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.007821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.007884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.986 qpair failed and we were unable to recover it.
00:29:53.986 [2024-07-14 03:17:49.008109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.008283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.008313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.986 qpair failed and we were unable to recover it.
00:29:53.986 [2024-07-14 03:17:49.008520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.008700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.008725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.986 qpair failed and we were unable to recover it.
00:29:53.986 [2024-07-14 03:17:49.008891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.009070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.009096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.986 qpair failed and we were unable to recover it.
00:29:53.986 [2024-07-14 03:17:49.009301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.009605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.009662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.986 qpair failed and we were unable to recover it.
00:29:53.986 [2024-07-14 03:17:49.009870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.010045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.010073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.986 qpair failed and we were unable to recover it.
00:29:53.986 [2024-07-14 03:17:49.010257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.010406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.010447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.986 qpair failed and we were unable to recover it.
00:29:53.986 [2024-07-14 03:17:49.010618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.010810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.010838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.986 qpair failed and we were unable to recover it.
00:29:53.986 [2024-07-14 03:17:49.011022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.011273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.011326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.986 qpair failed and we were unable to recover it.
00:29:53.986 [2024-07-14 03:17:49.011526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.011682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.011726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.986 qpair failed and we were unable to recover it.
00:29:53.986 [2024-07-14 03:17:49.011944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.012144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.012185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.986 qpair failed and we were unable to recover it.
00:29:53.986 [2024-07-14 03:17:49.012408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.012716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.986 [2024-07-14 03:17:49.012776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.987 qpair failed and we were unable to recover it.
00:29:53.987 [2024-07-14 03:17:49.012997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.987 [2024-07-14 03:17:49.013169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.987 [2024-07-14 03:17:49.013197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.987 qpair failed and we were unable to recover it.
00:29:53.987 [2024-07-14 03:17:49.013371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.987 [2024-07-14 03:17:49.013606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.987 [2024-07-14 03:17:49.013632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.987 qpair failed and we were unable to recover it.
00:29:53.987 [2024-07-14 03:17:49.013811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.987 [2024-07-14 03:17:49.013987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.987 [2024-07-14 03:17:49.014015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.987 qpair failed and we were unable to recover it.
00:29:53.987 [2024-07-14 03:17:49.014217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.987 [2024-07-14 03:17:49.014486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.987 [2024-07-14 03:17:49.014536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.987 qpair failed and we were unable to recover it.
00:29:53.987 [2024-07-14 03:17:49.014731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.987 [2024-07-14 03:17:49.014953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.987 [2024-07-14 03:17:49.014981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.987 qpair failed and we were unable to recover it.
00:29:53.987 [2024-07-14 03:17:49.015191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.987 [2024-07-14 03:17:49.015400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.987 [2024-07-14 03:17:49.015425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.987 qpair failed and we were unable to recover it.
00:29:53.987 [2024-07-14 03:17:49.015629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.987 [2024-07-14 03:17:49.015854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.987 [2024-07-14 03:17:49.015887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.987 qpair failed and we were unable to recover it.
00:29:53.987 [2024-07-14 03:17:49.016084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.987 [2024-07-14 03:17:49.016314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.987 [2024-07-14 03:17:49.016342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.987 qpair failed and we were unable to recover it.
00:29:53.987 [2024-07-14 03:17:49.016528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.987 [2024-07-14 03:17:49.016720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.987 [2024-07-14 03:17:49.016750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.987 qpair failed and we were unable to recover it. 00:29:53.987 [2024-07-14 03:17:49.016961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.987 [2024-07-14 03:17:49.017133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.987 [2024-07-14 03:17:49.017161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.987 qpair failed and we were unable to recover it. 00:29:53.987 [2024-07-14 03:17:49.017389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.987 [2024-07-14 03:17:49.017571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.987 [2024-07-14 03:17:49.017596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.987 qpair failed and we were unable to recover it. 00:29:53.987 [2024-07-14 03:17:49.017805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.987 [2024-07-14 03:17:49.017969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.987 [2024-07-14 03:17:49.017997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.987 qpair failed and we were unable to recover it. 
00:29:53.987 [2024-07-14 03:17:49.018198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.987 [2024-07-14 03:17:49.018458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.987 [2024-07-14 03:17:49.018509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.987 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.018710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.018907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.018936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.019107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.019408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.019458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.019681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.019885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.019926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 
00:29:53.988 [2024-07-14 03:17:49.020126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.020337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.020362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.020511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.020661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.020686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.020904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.021063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.021087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.021263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.021466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.021491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 
00:29:53.988 [2024-07-14 03:17:49.021726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.021875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.021900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.022108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.022462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.022523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.022720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.022937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.022966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.023192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.023391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.023419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 
00:29:53.988 [2024-07-14 03:17:49.023608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.023801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.023829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.024067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.024214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.024259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.024459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.024652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.024680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.024881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.025062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.025086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 
00:29:53.988 [2024-07-14 03:17:49.025234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.025438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.025463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.025613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.025832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.025859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.026091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.026430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.026487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.026722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.027010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.027038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 
00:29:53.988 [2024-07-14 03:17:49.027212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.027404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.027432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.027604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.027783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.027807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.028015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.028222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.028250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.028468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.028664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.028699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 
00:29:53.988 [2024-07-14 03:17:49.028902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.029098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.029126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.029351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.029668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.029718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.029950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.030135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.030162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.030318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.030472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.030512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 
00:29:53.988 [2024-07-14 03:17:49.030679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.030877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.030906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.031105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.031449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.031507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.031724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.031901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.031944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.988 qpair failed and we were unable to recover it. 00:29:53.988 [2024-07-14 03:17:49.032165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.032347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.988 [2024-07-14 03:17:49.032372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 
00:29:53.989 [2024-07-14 03:17:49.032575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.032814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.032839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.033034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.033347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.033405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.033608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.033805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.033830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.034018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.034314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.034365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 
00:29:53.989 [2024-07-14 03:17:49.034568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.034795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.034823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.034995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.035197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.035223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.035395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.035605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.035657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.035840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.036011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.036041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 
00:29:53.989 [2024-07-14 03:17:49.036239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.036431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.036460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.036683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.036913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.036939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.037119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.037351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.037406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.037626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.037825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.037851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 
00:29:53.989 [2024-07-14 03:17:49.038093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.038315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.038343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.038543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.038720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.038745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.038947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.039122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.039150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.039366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.039588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.039614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 
00:29:53.989 [2024-07-14 03:17:49.039823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.040065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.040094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.040290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.040487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.040514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.040718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.040915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.040943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.041148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.041325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.041354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 
00:29:53.989 [2024-07-14 03:17:49.041578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.041788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.041816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.042060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.042363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.042412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.042618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.042806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.042831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.043037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.043334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.043391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 
00:29:53.989 [2024-07-14 03:17:49.043609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.043804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.043832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.044052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.044408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.044460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.044683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.044856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.044890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.045088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.045450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.045510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 
00:29:53.989 [2024-07-14 03:17:49.045681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.045839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.045886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.046109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.046313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.046337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.989 qpair failed and we were unable to recover it. 00:29:53.989 [2024-07-14 03:17:49.046520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.989 [2024-07-14 03:17:49.046727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.046755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.046958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.047179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.047247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 
00:29:53.990 [2024-07-14 03:17:49.047481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.047764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.047818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.048032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.048190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.048217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.048422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.048777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.048829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.049017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.049214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.049243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 
00:29:53.990 [2024-07-14 03:17:49.049418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.049637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.049664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.049873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.050029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.050055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.050229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.050483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.050535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.050711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.050932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.050960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 
00:29:53.990 [2024-07-14 03:17:49.051135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.051327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.051357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.051557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.051730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.051757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.051973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.052181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.052206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.052410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.052710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.052759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 
00:29:53.990 [2024-07-14 03:17:49.052955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.053145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.053173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.053368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.053600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.053625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.053830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.054051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.054079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.054307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.054575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.054605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 
00:29:53.990 [2024-07-14 03:17:49.054816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.055010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.055039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.055237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.055433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.055461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.055679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.055903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.055927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.056082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.056263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.056290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 
00:29:53.990 [2024-07-14 03:17:49.056509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.056904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.056932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.057129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.057303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.057330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.057536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.057722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.057746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.057903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.058093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.058120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 
00:29:53.990 [2024-07-14 03:17:49.058315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.058485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.058513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.058706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.058925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.058953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.059152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.059301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.059328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.990 [2024-07-14 03:17:49.059499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.059719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.059744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 
00:29:53.990 [2024-07-14 03:17:49.059942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.060162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.990 [2024-07-14 03:17:49.060191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.990 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.060415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.060649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.060701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.060901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.061132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.061160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.061362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.061672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.061734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 
00:29:53.991 [2024-07-14 03:17:49.061938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.062091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.062115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.062290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.062473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.062501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.062698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.062922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.062948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.063121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.063319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.063349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 
00:29:53.991 [2024-07-14 03:17:49.063545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.063752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.063780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.063986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.064187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.064211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.064423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.064576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.064602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.064836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.065066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.065094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 
00:29:53.991 [2024-07-14 03:17:49.065321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.065641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.065704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.065931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.066109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.066135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.066361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.066685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.066740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.066961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.067189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.067245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 
00:29:53.991 [2024-07-14 03:17:49.067448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.067718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.067767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.067971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.068170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.068198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.068422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.068592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.068621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.068822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.068985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.069011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 
00:29:53.991 [2024-07-14 03:17:49.069190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.069388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.069413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.069640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.069830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.069858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.070041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.070279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.070336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.070568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.070739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.070767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 
00:29:53.991 [2024-07-14 03:17:49.070970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.071169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.071197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.071425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.071731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.071788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.991 [2024-07-14 03:17:49.071980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.072277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.991 [2024-07-14 03:17:49.072329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.991 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.072547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.072773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.072801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 
00:29:53.992 [2024-07-14 03:17:49.073029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.073209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.073236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.073436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.073740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.073795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.074017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.074243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.074291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.074509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.074680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.074707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 
00:29:53.992 [2024-07-14 03:17:49.074901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.075099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.075128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.075358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.075526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.075551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.075739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.075909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.075937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.076136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.076311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.076351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 
00:29:53.992 [2024-07-14 03:17:49.076570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.076731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.076759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.076966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.077170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.077196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.077403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.077671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.077728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.077904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.078100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.078127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 
00:29:53.992 [2024-07-14 03:17:49.078330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.078484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.078509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.078689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.078949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.078975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.079196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.079543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.079593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.079789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.080020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.080048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 
00:29:53.992 [2024-07-14 03:17:49.080244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.080440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.080468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.080655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.080879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.080908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.081104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.081328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.081355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.081520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.081714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.081741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 
00:29:53.992 [2024-07-14 03:17:49.081929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.082099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.082127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.082330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.082551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.082620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.082809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.082981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.083009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.083179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.083517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.083579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 
00:29:53.992 [2024-07-14 03:17:49.083807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.084002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.084030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.084228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.084420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.084445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.084587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.084819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.084847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.085054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.085247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.085274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 
00:29:53.992 [2024-07-14 03:17:49.085471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.085667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.085691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.992 qpair failed and we were unable to recover it. 00:29:53.992 [2024-07-14 03:17:49.085896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.992 [2024-07-14 03:17:49.086102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.086130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.086337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.086533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.086561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.086783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.086979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.087005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 
00:29:53.993 [2024-07-14 03:17:49.087228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.087618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.087678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.087875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.088074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.088101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.088282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.088554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.088608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.088830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.089006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.089034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 
00:29:53.993 [2024-07-14 03:17:49.089223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.089409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.089436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.089621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.089840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.089873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.090071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.090263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.090290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.090492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.090684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.090736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 
00:29:53.993 [2024-07-14 03:17:49.090959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.091132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.091158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.091336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.091537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.091562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.091736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.091912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.091940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.092137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.092319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.092343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 
00:29:53.993 [2024-07-14 03:17:49.092531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.092721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.092753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.092961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.093163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.093191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.093351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.093604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.093659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.093875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.094027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.094052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 
00:29:53.993 [2024-07-14 03:17:49.094256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.094402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.094426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.094621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.094789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.094816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.095029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.095221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.095272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.095449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.095623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.095649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 
00:29:53.993 [2024-07-14 03:17:49.095820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.096029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.096054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.096228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.096458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.096486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.096650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.096800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.096849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.097058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.097212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.097237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 
00:29:53.993 [2024-07-14 03:17:49.097464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.097730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.097784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.097977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.098160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.098185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.098363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.098536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.098561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 00:29:53.993 [2024-07-14 03:17:49.098768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.098989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.099017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.993 qpair failed and we were unable to recover it. 
00:29:53.993 [2024-07-14 03:17:49.099226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.993 [2024-07-14 03:17:49.099402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.099426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 00:29:53.994 [2024-07-14 03:17:49.099599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.099791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.099819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 00:29:53.994 [2024-07-14 03:17:49.100056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.100257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.100287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 00:29:53.994 [2024-07-14 03:17:49.100514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.100715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.100742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 
00:29:53.994 [2024-07-14 03:17:49.100913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.101083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.101113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 00:29:53.994 [2024-07-14 03:17:49.101268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.101445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.101470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 00:29:53.994 [2024-07-14 03:17:49.101640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.101870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.101899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 00:29:53.994 [2024-07-14 03:17:49.102095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.102295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.102323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 
00:29:53.994 [2024-07-14 03:17:49.102546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.102739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.102764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 00:29:53.994 [2024-07-14 03:17:49.102912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.103105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.103133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 00:29:53.994 [2024-07-14 03:17:49.103325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.103520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.103548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 00:29:53.994 [2024-07-14 03:17:49.103743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.103965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.103994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 
00:29:53.994 [2024-07-14 03:17:49.104228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.104452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.104479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 00:29:53.994 [2024-07-14 03:17:49.104696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.104912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.104937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 00:29:53.994 [2024-07-14 03:17:49.105140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.105459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.105513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 00:29:53.994 [2024-07-14 03:17:49.105692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.105887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.105917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 
00:29:53.994 [2024-07-14 03:17:49.106117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.106401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.106452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 00:29:53.994 [2024-07-14 03:17:49.106689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.106871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.106897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 00:29:53.994 [2024-07-14 03:17:49.107053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.107256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.107281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 00:29:53.994 [2024-07-14 03:17:49.107488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.107809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.107859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 
00:29:53.994 [2024-07-14 03:17:49.108092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.108325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.108366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 00:29:53.994 [2024-07-14 03:17:49.108601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.108803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.108831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 00:29:53.994 [2024-07-14 03:17:49.109023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.109251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.109276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 00:29:53.994 [2024-07-14 03:17:49.109480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.109706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.994 [2024-07-14 03:17:49.109734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.994 qpair failed and we were unable to recover it. 
00:29:53.994 [2024-07-14 03:17:49.109908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.994 [2024-07-14 03:17:49.110083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.994 [2024-07-14 03:17:49.110108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.994 qpair failed and we were unable to recover it.
00:29:53.994 [2024-07-14 03:17:49.110315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.994 [2024-07-14 03:17:49.110574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.994 [2024-07-14 03:17:49.110603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.994 qpair failed and we were unable to recover it.
00:29:53.994 [2024-07-14 03:17:49.110798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.994 [2024-07-14 03:17:49.110988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.994 [2024-07-14 03:17:49.111018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.994 qpair failed and we were unable to recover it.
00:29:53.994 [2024-07-14 03:17:49.111220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.994 [2024-07-14 03:17:49.111420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.994 [2024-07-14 03:17:49.111446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.994 qpair failed and we were unable to recover it.
00:29:53.994 [2024-07-14 03:17:49.111650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.994 [2024-07-14 03:17:49.111850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.994 [2024-07-14 03:17:49.111879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.994 qpair failed and we were unable to recover it.
00:29:53.994 [2024-07-14 03:17:49.112059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.994 [2024-07-14 03:17:49.112233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.994 [2024-07-14 03:17:49.112258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.994 qpair failed and we were unable to recover it.
00:29:53.994 [2024-07-14 03:17:49.112437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.994 [2024-07-14 03:17:49.112583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.994 [2024-07-14 03:17:49.112607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.994 qpair failed and we were unable to recover it.
00:29:53.994 [2024-07-14 03:17:49.112778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.112995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.113024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.113188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.113396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.113447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.113639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.113869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.113898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.114055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.114289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.114352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.114580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.114753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.114783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.115019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.115174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.115199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.115420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.115703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.115728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.115932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.116128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.116156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.116354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.116589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.116660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.116882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.117082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.117110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.117305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.117555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.117583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.117753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.117934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.117959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.118169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.118476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.118537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.118758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.118961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.118989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.119202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.119361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.119386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.119561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.119735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.119759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.119943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.120185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.120249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.120471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.120705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.120757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.120930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.121098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.121125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.121320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.121545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.121602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.121798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.121993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.122021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.122241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.122408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.122435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.122668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.122874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.122900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.123147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.123340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.123387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.123598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.123770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.123797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.123970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.124190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.124243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.124437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.124661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.124722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.124933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.125116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.125141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.125366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.125518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.125560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.125742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.125917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.125960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.995 qpair failed and we were unable to recover it.
00:29:53.995 [2024-07-14 03:17:49.126158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.126314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.995 [2024-07-14 03:17:49.126340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.126521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.126725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.126752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.126952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.127207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.127256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.127448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.127718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.127767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.127972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.128261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.128310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.128530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.128711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.128738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.128940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.129142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.129167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.129344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.129542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.129569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.129764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.129989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.130041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.130213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.130432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.130459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.130679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.130877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.130904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.131115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.131383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.131439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.131617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.131786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.131814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.132020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.132201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.132226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.132399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.132559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.132623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.132793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.132990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.133018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.133215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.133468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.133516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.133742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.133992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.134053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.134252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.134572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.134633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.134851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.135074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.135101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.135306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.135512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.135568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.135797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.135942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.135968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.136170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.136532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.136590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.136811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.137011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.137036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.137195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.137370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.137413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.137588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.137739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.137764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.996 [2024-07-14 03:17:49.137938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.138115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.996 [2024-07-14 03:17:49.138140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.996 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.138327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.138618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.138667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.138913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.139082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.139111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.139290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.139428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.139469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.139664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.139904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.139932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.140109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.140268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.140293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.140546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.140773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.140801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.141028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.141206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.141233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.141440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.141623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.141647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.141860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.142058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.142083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.142236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.142415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.142439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.142592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.142745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.142784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.142984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.143158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.143184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.143391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.143564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.143589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.143767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.143992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.144020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.144238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.144424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.144449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.144593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.144790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.144814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.145002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.145181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.145206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.145384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.145539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.145565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.145791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.146100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.146166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.146360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.146618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.146669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.146893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.147098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.147123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.147307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.147564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.147589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.147789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.147972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.148010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.148187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.148338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.997 [2024-07-14 03:17:49.148364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.997 qpair failed and we were unable to recover it.
00:29:53.997 [2024-07-14 03:17:49.148539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-07-14 03:17:49.148713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-07-14 03:17:49.148739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.997 qpair failed and we were unable to recover it. 00:29:53.997 [2024-07-14 03:17:49.148902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-07-14 03:17:49.149087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-07-14 03:17:49.149113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.997 qpair failed and we were unable to recover it. 00:29:53.997 [2024-07-14 03:17:49.149277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-07-14 03:17:49.149540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-07-14 03:17:49.149591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.997 qpair failed and we were unable to recover it. 00:29:53.997 [2024-07-14 03:17:49.149811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-07-14 03:17:49.149991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-07-14 03:17:49.150020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.997 qpair failed and we were unable to recover it. 
00:29:53.997 [2024-07-14 03:17:49.150246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-07-14 03:17:49.150597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-07-14 03:17:49.150643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.997 qpair failed and we were unable to recover it. 00:29:53.997 [2024-07-14 03:17:49.150841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-07-14 03:17:49.151052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-07-14 03:17:49.151078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.997 qpair failed and we were unable to recover it. 00:29:53.997 [2024-07-14 03:17:49.151255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-07-14 03:17:49.151427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.997 [2024-07-14 03:17:49.151452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.151623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.151766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.151793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 
00:29:53.998 [2024-07-14 03:17:49.151973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.152198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.152263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.152529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.152725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.152753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.153023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.153292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.153343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.153567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.153836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.153864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 
00:29:53.998 [2024-07-14 03:17:49.154067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.154357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.154422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.154638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.154920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.154950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.155181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.155554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.155613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.155810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.155978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.156006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 
00:29:53.998 [2024-07-14 03:17:49.156216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.156414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.156439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.156591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.156775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.156800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.156979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.157157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.157182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.157394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.157563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.157590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 
00:29:53.998 [2024-07-14 03:17:49.157794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.157972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.157998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.158147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.158321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.158363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.158560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.158818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.158844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.159002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.159153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.159198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 
00:29:53.998 [2024-07-14 03:17:49.159406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.159599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.159624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.159796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.160026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.160055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.160248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.160425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.160451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.160603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.160778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.160821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 
00:29:53.998 [2024-07-14 03:17:49.161045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.161265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.161351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.161551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.161729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.161754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.161969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.162118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.162143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.162345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.162616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.162667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 
00:29:53.998 [2024-07-14 03:17:49.162842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.163057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.163083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.163299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.163501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.163536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.163705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.163906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.163937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.164135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.164443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.164504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 
00:29:53.998 [2024-07-14 03:17:49.164696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.164886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.164915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.998 qpair failed and we were unable to recover it. 00:29:53.998 [2024-07-14 03:17:49.165109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.998 [2024-07-14 03:17:49.165363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.165415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.165592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.165766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.165791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.166021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.166323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.166373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 
00:29:53.999 [2024-07-14 03:17:49.166617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.166813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.166841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.167067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.167269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.167294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.167478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.167652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.167677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.167876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.168093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.168122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 
00:29:53.999 [2024-07-14 03:17:49.168327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.168494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.168518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.168667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.168843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.168878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.169105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.169370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.169421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.169620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.169820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.169845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 
00:29:53.999 [2024-07-14 03:17:49.170007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.170181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.170206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.170411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.170702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.170753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.170982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.171271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.171319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.171519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.171682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.171709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 
00:29:53.999 [2024-07-14 03:17:49.171883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.172080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.172106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.172281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.172432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.172462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.172665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.172840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.172875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.173101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.173417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.173476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 
00:29:53.999 [2024-07-14 03:17:49.173675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.173881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.173907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.174069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.174274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.174298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.174447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.174661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.174725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.174925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.175106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.175131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 
00:29:53.999 [2024-07-14 03:17:49.175360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.175532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.175559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.175781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.175954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.175987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.176149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.176374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.176401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 00:29:53.999 [2024-07-14 03:17:49.176595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.176785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:53.999 [2024-07-14 03:17:49.176812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:53.999 qpair failed and we were unable to recover it. 
00:29:53.999 [2024-07-14 03:17:49.177023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.999 [2024-07-14 03:17:49.177204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.999 [2024-07-14 03:17:49.177233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.999 qpair failed and we were unable to recover it.
00:29:53.999 [2024-07-14 03:17:49.177419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.999 [2024-07-14 03:17:49.177621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.999 [2024-07-14 03:17:49.177645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.999 qpair failed and we were unable to recover it.
00:29:53.999 [2024-07-14 03:17:49.177862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.999 [2024-07-14 03:17:49.178044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.999 [2024-07-14 03:17:49.178069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.999 qpair failed and we were unable to recover it.
00:29:53.999 [2024-07-14 03:17:49.178248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.999 [2024-07-14 03:17:49.178423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:53.999 [2024-07-14 03:17:49.178448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:53.999 qpair failed and we were unable to recover it.
00:29:53.999 [2024-07-14 03:17:49.178636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.178807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.178832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.179026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.179380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.179434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.179640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.179871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.179900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.180084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.180264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.180288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.180471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.180743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.180793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.181016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.181295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.181346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.181586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.181831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.181856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.182021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.182198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.182223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.182399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.182575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.182602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.182779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.182962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.182987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.183160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.183307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.183332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.183478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.183655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.183697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.183909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.184090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.184115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.184273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.184447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.184471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.184628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.184823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.184851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.185037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.185242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.185267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.185480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.185675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.185702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.185896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.186090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.186118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.186359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.186546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.186586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.186756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.186989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.187014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.187172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.187382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.187410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.187614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.187787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.187812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.187992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.188213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.188280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.188477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.188706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.188733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.188910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.189090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.189115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.189269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.189449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.189475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.000 [2024-07-14 03:17:49.189663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.189864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.000 [2024-07-14 03:17:49.189899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.000 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.190125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.190279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.190304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.190474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.190659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.190720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.190892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.191057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.191084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.191282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.191438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.191463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.191669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.191818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.191843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.192009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.192282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.192331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.192507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.192681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.192706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.192883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.193049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.193076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.193270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.193612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.193675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.193909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.194063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.194088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.194281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.194480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.194504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.194707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.194884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.194910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.195116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.195339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.195398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.195595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.195767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.195795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.196019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.196247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.196301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.196475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.196653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.196679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.196829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.197013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.197042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.197255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.197535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.197586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.197804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.197957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.197983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.198190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.198354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.198381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.198667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.198835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.198860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.199051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.199207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.199232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.199407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.199580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.199605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.199816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.200013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.200041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.200263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.200439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.200463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.200643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.200826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.200851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.201047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.201225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.201251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.201429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.201600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.201625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.201813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.202005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.202033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.001 [2024-07-14 03:17:49.202260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.202432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.001 [2024-07-14 03:17:49.202460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.001 qpair failed and we were unable to recover it.
00:29:54.002 [2024-07-14 03:17:49.202653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.202829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.202853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.002 qpair failed and we were unable to recover it.
00:29:54.002 [2024-07-14 03:17:49.203088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.203287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.203314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.002 qpair failed and we were unable to recover it.
00:29:54.002 [2024-07-14 03:17:49.203506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.203792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.203842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.002 qpair failed and we were unable to recover it.
00:29:54.002 [2024-07-14 03:17:49.204045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.204247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.204272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.002 qpair failed and we were unable to recover it.
00:29:54.002 [2024-07-14 03:17:49.204489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.204665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.204691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.002 qpair failed and we were unable to recover it.
00:29:54.002 [2024-07-14 03:17:49.204860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.205070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.205095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.002 qpair failed and we were unable to recover it.
00:29:54.002 [2024-07-14 03:17:49.205252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.205453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.205478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.002 qpair failed and we were unable to recover it.
00:29:54.002 [2024-07-14 03:17:49.205701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.205880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.205906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.002 qpair failed and we were unable to recover it.
00:29:54.002 [2024-07-14 03:17:49.206116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.206296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.206321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.002 qpair failed and we were unable to recover it.
00:29:54.002 [2024-07-14 03:17:49.206529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.206709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.206734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.002 qpair failed and we were unable to recover it.
00:29:54.002 [2024-07-14 03:17:49.206891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.207131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.002 [2024-07-14 03:17:49.207155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.002 qpair failed and we were unable to recover it.
00:29:54.002 [2024-07-14 03:17:49.207357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.207626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.207675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-07-14 03:17:49.207876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.208054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.208080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-07-14 03:17:49.208241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.208413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.208482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-07-14 03:17:49.208700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.208893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.208922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 
00:29:54.002 [2024-07-14 03:17:49.209120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.209273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.209298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-07-14 03:17:49.209501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.209677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.209701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-07-14 03:17:49.209858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.210022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.210047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-07-14 03:17:49.210219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.210366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.210392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 
00:29:54.002 [2024-07-14 03:17:49.210545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.210762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.210790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-07-14 03:17:49.210992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.211165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.211190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-07-14 03:17:49.211382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.211572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.211600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-07-14 03:17:49.211797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.212027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.212053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 
00:29:54.002 [2024-07-14 03:17:49.212229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.212540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.212599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-07-14 03:17:49.212818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.212972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.212997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-07-14 03:17:49.213141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.213320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.213345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-07-14 03:17:49.213552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.213745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.213773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 
00:29:54.002 [2024-07-14 03:17:49.213937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.214090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.214131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-07-14 03:17:49.214332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.214508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.214533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-07-14 03:17:49.214683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.214864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.214925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.002 qpair failed and we were unable to recover it. 00:29:54.002 [2024-07-14 03:17:49.215154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.002 [2024-07-14 03:17:49.215337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.215362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 
00:29:54.003 [2024-07-14 03:17:49.215538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.215776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.215805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-14 03:17:49.216037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.216277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.216333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-14 03:17:49.216528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.216696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.216721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-14 03:17:49.216920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.217136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.217164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 
00:29:54.003 [2024-07-14 03:17:49.217397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.217565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.217590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-14 03:17:49.217763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.217939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.217965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-14 03:17:49.218164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.218452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.218513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-14 03:17:49.218728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.218903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.218929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 
00:29:54.003 [2024-07-14 03:17:49.219117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.219296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.219321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-14 03:17:49.219508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.219704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.219732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-14 03:17:49.219939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.220088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.220114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-14 03:17:49.220306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.220491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.220528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 
00:29:54.003 [2024-07-14 03:17:49.220763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.220968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.220999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-14 03:17:49.221192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.221372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.221397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-14 03:17:49.221579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.221729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.221756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-14 03:17:49.221940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.222144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.222170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 
00:29:54.003 [2024-07-14 03:17:49.222328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.222504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.222532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-14 03:17:49.222697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.222905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.222930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-14 03:17:49.223130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.223343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.223369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-14 03:17:49.223541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.223745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.223769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 
00:29:54.003 [2024-07-14 03:17:49.223937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.224114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.003 [2024-07-14 03:17:49.224151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.003 qpair failed and we were unable to recover it. 00:29:54.003 [2024-07-14 03:17:49.224359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.224540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.224565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.224744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.224954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.224981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.225153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.225305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.225329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 
00:29:54.278 [2024-07-14 03:17:49.225500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.225678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.225704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.225884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.226039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.226064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.226244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.226396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.226421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.226597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.226778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.226803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 
00:29:54.278 [2024-07-14 03:17:49.226960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.227162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.227193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.227396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.227602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.227630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.227863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.228023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.228048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.228254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.228436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.228460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 
00:29:54.278 [2024-07-14 03:17:49.228607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.228786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.228811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.228969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.229148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.229173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.229321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.229498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.229523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.229742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.229944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.229969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 
00:29:54.278 [2024-07-14 03:17:49.230180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.230349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.230378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.230575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.230774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.230801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.230991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.231181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.231213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.231443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.231593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.231618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 
00:29:54.278 [2024-07-14 03:17:49.231772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.231948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.231974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.232156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.232469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.232520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.232744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.232938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.232966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.233132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.233306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.233330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 
00:29:54.278 [2024-07-14 03:17:49.233509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.233711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.233739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.233934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.234098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.234127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.234322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.234559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.234607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 00:29:54.278 [2024-07-14 03:17:49.234829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.235045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.278 [2024-07-14 03:17:49.235074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.278 qpair failed and we were unable to recover it. 
[... the same sequence repeats: posix.c:1032:posix_sock_create connect() failed (errno = 111), then nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.", continuously from 03:17:49.235277 through 03:17:49.270297; identical repeats elided ...]
00:29:54.281 [2024-07-14 03:17:49.270478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.281 [2024-07-14 03:17:49.270650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.281 [2024-07-14 03:17:49.270675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.281 qpair failed and we were unable to recover it. 00:29:54.281 [2024-07-14 03:17:49.270892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.281 [2024-07-14 03:17:49.271048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.281 [2024-07-14 03:17:49.271073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.281 qpair failed and we were unable to recover it. 00:29:54.281 [2024-07-14 03:17:49.271278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.281 [2024-07-14 03:17:49.271430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.281 [2024-07-14 03:17:49.271456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.281 qpair failed and we were unable to recover it. 00:29:54.281 [2024-07-14 03:17:49.271632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.281 [2024-07-14 03:17:49.271783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.281 [2024-07-14 03:17:49.271808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.281 qpair failed and we were unable to recover it. 
00:29:54.281 [2024-07-14 03:17:49.271993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.281 [2024-07-14 03:17:49.272165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.281 [2024-07-14 03:17:49.272190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.281 qpair failed and we were unable to recover it. 00:29:54.281 [2024-07-14 03:17:49.272387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.281 [2024-07-14 03:17:49.272582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.281 [2024-07-14 03:17:49.272610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.281 qpair failed and we were unable to recover it. 00:29:54.281 [2024-07-14 03:17:49.272810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.281 [2024-07-14 03:17:49.273004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.281 [2024-07-14 03:17:49.273032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.281 qpair failed and we were unable to recover it. 00:29:54.281 [2024-07-14 03:17:49.273237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.281 [2024-07-14 03:17:49.273442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.281 [2024-07-14 03:17:49.273468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.281 qpair failed and we were unable to recover it. 
00:29:54.281 [2024-07-14 03:17:49.273623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.281 [2024-07-14 03:17:49.273797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.273823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.274014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.274167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.274192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.274393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.274569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.274593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.274739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.274924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.274950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 
00:29:54.282 [2024-07-14 03:17:49.275113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.275284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.275308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.275469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.275644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.275671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.275854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.276017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.276042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.276208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.276402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.276430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 
00:29:54.282 [2024-07-14 03:17:49.276592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.276789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.276815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.277009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.277191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.277217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.277363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.277569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.277595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.277800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.277955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.277982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 
00:29:54.282 [2024-07-14 03:17:49.278136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.278311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.278339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.278536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.278707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.278734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.278928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.279145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.279173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.279346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.279522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.279547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 
00:29:54.282 [2024-07-14 03:17:49.279730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.279932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.279960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.280153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.280483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.280534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.280765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.280957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.280985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.281180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.281332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.281357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 
00:29:54.282 [2024-07-14 03:17:49.281539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.281689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.281714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.281894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.282078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.282119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.282328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.282519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.282549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.282777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.282958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.282984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 
00:29:54.282 [2024-07-14 03:17:49.283164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.283364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.282 [2024-07-14 03:17:49.283405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.282 qpair failed and we were unable to recover it. 00:29:54.282 [2024-07-14 03:17:49.283577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.283781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.283809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.284000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.284152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.284177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.284323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.284492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.284517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 
00:29:54.283 [2024-07-14 03:17:49.284694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.284848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.284879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.285083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.285386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.285438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.285637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.285854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.285890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.286123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.286373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.286429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 
00:29:54.283 [2024-07-14 03:17:49.286605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.286804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.286832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.287041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.287258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.287285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.287469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.287696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.287754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.287946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.288162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.288188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 
00:29:54.283 [2024-07-14 03:17:49.288389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.288586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.288614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.288850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.289009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.289034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.289218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.289392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.289417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.289569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.289722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.289747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 
00:29:54.283 [2024-07-14 03:17:49.289946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.290153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.290181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.290378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.290635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.290694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.290891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.291091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.291119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.291321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.291500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.291543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 
00:29:54.283 [2024-07-14 03:17:49.291719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.291941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.291971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.292179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.292422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.292474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.292683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.292877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.292905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.293103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.293274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.293299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 
00:29:54.283 [2024-07-14 03:17:49.293476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.293652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.293677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.293824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.294011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.294041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.294244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.294435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.294461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 00:29:54.283 [2024-07-14 03:17:49.294637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.294788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.283 [2024-07-14 03:17:49.294813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.283 qpair failed and we were unable to recover it. 
00:29:54.286 [2024-07-14 03:17:49.329295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.286 [2024-07-14 03:17:49.329471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.286 [2024-07-14 03:17:49.329496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.286 qpair failed and we were unable to recover it. 00:29:54.286 [2024-07-14 03:17:49.329673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.286 [2024-07-14 03:17:49.329856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.286 [2024-07-14 03:17:49.329889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.286 qpair failed and we were unable to recover it. 00:29:54.286 [2024-07-14 03:17:49.330097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.286 [2024-07-14 03:17:49.330342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.286 [2024-07-14 03:17:49.330371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.286 qpair failed and we were unable to recover it. 00:29:54.286 [2024-07-14 03:17:49.330601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.286 [2024-07-14 03:17:49.330779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.286 [2024-07-14 03:17:49.330804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.286 qpair failed and we were unable to recover it. 
00:29:54.286 [2024-07-14 03:17:49.331007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.286 [2024-07-14 03:17:49.331238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.286 [2024-07-14 03:17:49.331293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.286 qpair failed and we were unable to recover it. 00:29:54.286 [2024-07-14 03:17:49.331568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.286 [2024-07-14 03:17:49.331761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.286 [2024-07-14 03:17:49.331789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.331984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.332214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.332264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.332489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.332768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.332822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 
00:29:54.287 [2024-07-14 03:17:49.333013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.333195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.333221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.333426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.333720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.333778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.334005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.334163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.334191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.334389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.334592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.334617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 
00:29:54.287 [2024-07-14 03:17:49.334798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.334976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.335001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.335199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.335351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.335376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.335558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.335759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.335784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.335935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.336102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.336130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 
00:29:54.287 [2024-07-14 03:17:49.336401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.336683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.336732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.336936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.337140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.337166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.337346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.337669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.337722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.337919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.338095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.338121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 
00:29:54.287 [2024-07-14 03:17:49.338376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.338684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.338739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.338939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.339295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.339356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.339550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.339722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.339747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.339949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.340211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.340263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 
00:29:54.287 [2024-07-14 03:17:49.340461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.340837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.340909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.341105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.341345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.341397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.341597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.341817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.341845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.342015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.342195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.342236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 
00:29:54.287 [2024-07-14 03:17:49.342456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.342632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.342657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.342831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.343040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.343065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.343268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.343554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.343605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.343808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.344005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.344034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 
00:29:54.287 [2024-07-14 03:17:49.344206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.344384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.344408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.344588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.344771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.344797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.345035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.345237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.345265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-07-14 03:17:49.345459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.345654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.345680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.287 qpair failed and we were unable to recover it. 
00:29:54.287 [2024-07-14 03:17:49.345835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.287 [2024-07-14 03:17:49.346061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.346089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-07-14 03:17:49.346281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.346450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.346479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-07-14 03:17:49.346703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.346898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.346925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-07-14 03:17:49.347098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.347295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.347323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 
00:29:54.288 [2024-07-14 03:17:49.347543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.347711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.347739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-07-14 03:17:49.347929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.348163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.348188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-07-14 03:17:49.348357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.348581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.348609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-07-14 03:17:49.348850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.349067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.349093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 
00:29:54.288 [2024-07-14 03:17:49.349253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.349409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.349434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-07-14 03:17:49.349608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.349784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.349810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-07-14 03:17:49.349962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.350115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.350140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-07-14 03:17:49.350322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.350501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.350526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 
00:29:54.288 [2024-07-14 03:17:49.350675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.350877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.350905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-07-14 03:17:49.351135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.351328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.351357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-07-14 03:17:49.351530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.351720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.351747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-07-14 03:17:49.351947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.352127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.352152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 
00:29:54.288 [2024-07-14 03:17:49.352453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.352694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.352721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-07-14 03:17:49.352923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.353124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.353149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-07-14 03:17:49.353326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.353533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.353557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-07-14 03:17:49.353761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.353930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.353956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 
00:29:54.288 [2024-07-14 03:17:49.354103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.354257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.354283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-07-14 03:17:49.354462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.354646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.354671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-07-14 03:17:49.354903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.355100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.355125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-07-14 03:17:49.355327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.355524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.288 [2024-07-14 03:17:49.355575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.288 qpair failed and we were unable to recover it. 
00:29:54.291 [... the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock (tqpair=0x7f4b04000b90, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." cycle repeats through 2024-07-14 03:17:49.390307 ...]
00:29:54.291 [2024-07-14 03:17:49.390468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.390705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.390733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.291 qpair failed and we were unable to recover it. 00:29:54.291 [2024-07-14 03:17:49.390967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.391138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.391166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.291 qpair failed and we were unable to recover it. 00:29:54.291 [2024-07-14 03:17:49.391359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.391562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.391587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.291 qpair failed and we were unable to recover it. 00:29:54.291 [2024-07-14 03:17:49.391792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.391936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.391962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.291 qpair failed and we were unable to recover it. 
00:29:54.291 [2024-07-14 03:17:49.392141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.392352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.392412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.291 qpair failed and we were unable to recover it. 00:29:54.291 [2024-07-14 03:17:49.392593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.392772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.392816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.291 qpair failed and we were unable to recover it. 00:29:54.291 [2024-07-14 03:17:49.393017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.393228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.393279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.291 qpair failed and we were unable to recover it. 00:29:54.291 [2024-07-14 03:17:49.393485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.393686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.393711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.291 qpair failed and we were unable to recover it. 
00:29:54.291 [2024-07-14 03:17:49.393857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.394071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.394099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.291 qpair failed and we were unable to recover it. 00:29:54.291 [2024-07-14 03:17:49.394289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.394537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.291 [2024-07-14 03:17:49.394593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.291 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.394822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.395005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.395031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.395186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.395408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.395436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 
00:29:54.292 [2024-07-14 03:17:49.395633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.395822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.395850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.396081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.396373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.396424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.396623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.396838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.396872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.397072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.397216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.397241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 
00:29:54.292 [2024-07-14 03:17:49.397420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.397675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.397726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.397936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.398115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.398140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.398315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.398501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.398527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.398705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.398932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.398961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 
00:29:54.292 [2024-07-14 03:17:49.399137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.399310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.399336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.399519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.399695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.399721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.399926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.400194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.400219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.400393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.400573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.400599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 
00:29:54.292 [2024-07-14 03:17:49.400755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.400931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.400957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.401106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.401278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.401303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.401503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.401701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.401729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.401906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.402083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.402108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 
00:29:54.292 [2024-07-14 03:17:49.402284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.402458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.402483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.402655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.402857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.402889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.403071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.403251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.403276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.403430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.403633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.403657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 
00:29:54.292 [2024-07-14 03:17:49.403860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.404068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.404097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.404262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.404425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.404452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.404649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.404843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.404876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.405075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.405256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.405281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 
00:29:54.292 [2024-07-14 03:17:49.405433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.405638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.405663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.405848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.406070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.406099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.406298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.406507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.406532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 00:29:54.292 [2024-07-14 03:17:49.406689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.406841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.406877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.292 qpair failed and we were unable to recover it. 
00:29:54.292 [2024-07-14 03:17:49.407085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.407406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.292 [2024-07-14 03:17:49.407457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 00:29:54.293 [2024-07-14 03:17:49.407656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.407890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.407917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 00:29:54.293 [2024-07-14 03:17:49.408097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.408278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.408303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 00:29:54.293 [2024-07-14 03:17:49.408476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.408777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.408828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 
00:29:54.293 [2024-07-14 03:17:49.409032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.409252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.409302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 00:29:54.293 [2024-07-14 03:17:49.409522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.409788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.409816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 00:29:54.293 [2024-07-14 03:17:49.410050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.410305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.410357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 00:29:54.293 [2024-07-14 03:17:49.410550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.410698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.410725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 
00:29:54.293 [2024-07-14 03:17:49.410949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.411119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.411148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 00:29:54.293 [2024-07-14 03:17:49.411344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.411545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.411573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 00:29:54.293 [2024-07-14 03:17:49.411773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.411979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.412004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 00:29:54.293 [2024-07-14 03:17:49.412181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.412369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.412396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 
00:29:54.293 [2024-07-14 03:17:49.412600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.412796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.412823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 00:29:54.293 [2024-07-14 03:17:49.413031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.413211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.413235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 00:29:54.293 [2024-07-14 03:17:49.413388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.413607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.413635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 00:29:54.293 [2024-07-14 03:17:49.413813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.413994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.414019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 
00:29:54.293 [2024-07-14 03:17:49.414199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.414400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.414462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 00:29:54.293 [2024-07-14 03:17:49.414649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.414882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.414912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 00:29:54.293 [2024-07-14 03:17:49.415106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.415308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.415332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 00:29:54.293 [2024-07-14 03:17:49.415534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.415716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.415740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 
00:29:54.293 [2024-07-14 03:17:49.415922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.416209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.416259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 00:29:54.293 [2024-07-14 03:17:49.416481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.416771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.416819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 00:29:54.293 [2024-07-14 03:17:49.417043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.417240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.417300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 00:29:54.293 [2024-07-14 03:17:49.417524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.417701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.293 [2024-07-14 03:17:49.417726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.293 qpair failed and we were unable to recover it. 
00:29:54.293 [2024-07-14 03:17:49.417898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.293 [2024-07-14 03:17:49.418079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.293 [2024-07-14 03:17:49.418105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.293 qpair failed and we were unable to recover it.
00:29:54.293 [2024-07-14 03:17:49.418335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.293 [2024-07-14 03:17:49.418614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.293 [2024-07-14 03:17:49.418665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.293 qpair failed and we were unable to recover it.
00:29:54.293 [2024-07-14 03:17:49.418896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.293 [2024-07-14 03:17:49.419066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.293 [2024-07-14 03:17:49.419096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.293 qpair failed and we were unable to recover it.
00:29:54.293 [2024-07-14 03:17:49.419320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.293 [2024-07-14 03:17:49.419580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.293 [2024-07-14 03:17:49.419631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.293 qpair failed and we were unable to recover it.
00:29:54.293 [2024-07-14 03:17:49.419813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.293 [2024-07-14 03:17:49.419998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.293 [2024-07-14 03:17:49.420041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.293 qpair failed and we were unable to recover it.
00:29:54.293 [2024-07-14 03:17:49.420242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.293 [2024-07-14 03:17:49.420464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.293 [2024-07-14 03:17:49.420491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.293 qpair failed and we were unable to recover it.
00:29:54.293 [2024-07-14 03:17:49.420686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.293 [2024-07-14 03:17:49.420840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.293 [2024-07-14 03:17:49.420874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.293 qpair failed and we were unable to recover it.
00:29:54.293 [2024-07-14 03:17:49.421098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.293 [2024-07-14 03:17:49.421310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.421337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.421527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.421746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.421771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.421972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.422225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.422276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.422510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.422760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.422816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.423047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.423251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.423301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.423541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.423708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.423736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.423928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.424227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.424282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.424476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.424650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.424675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.424855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.425064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.425091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.425284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.425535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.425586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.425786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.425959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.425990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.426201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.426515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.426566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.426770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.426970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.426999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.427197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.427487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.427537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.427762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.427958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.427987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.428206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.428492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.428540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.428740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.428943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.428971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.429145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.429512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.429561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.429763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.429921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.429948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.430149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.430434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.430485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.430689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.430863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.430898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.431129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.431351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.431375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.431576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.431778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.431804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.432030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.432268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.432322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.432530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.432683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.432708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.432854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.433080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.433108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.433332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.433631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.433686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.433908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.434074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.434102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.434283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.434455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.434518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.434740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.434954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.434979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.435185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.435465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.294 [2024-07-14 03:17:49.435519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.294 qpair failed and we were unable to recover it.
00:29:54.294 [2024-07-14 03:17:49.435715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.435893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.435917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.436078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.436279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.436302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.436504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.436695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.436721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.436924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.437175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.437235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.437439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.437752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.437805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.438007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.438157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.438182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.438403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.438646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.438696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.438903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.439091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.439119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.439313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.439503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.439531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.439798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.440029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.440059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.440258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.440637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.440699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.440895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.441058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.441086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.441281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.441616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.441667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.441872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.442045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.442070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.442272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.442425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.442449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.442657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.442827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.442854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.443059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.443257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.443283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.443458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.443630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.443655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.443832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.444050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.444077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.444282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.444503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.444530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.444732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.444925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.444954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.445171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.445437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.445489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.445684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.445878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.445904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.446076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.446253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.446277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.446428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.446633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.446674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.446880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.447081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.447106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.295 qpair failed and we were unable to recover it.
00:29:54.295 [2024-07-14 03:17:49.447262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.295 [2024-07-14 03:17:49.447403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.296 [2024-07-14 03:17:49.447428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.296 qpair failed and we were unable to recover it.
00:29:54.296 [2024-07-14 03:17:49.447658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.296 [2024-07-14 03:17:49.447873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.296 [2024-07-14 03:17:49.447902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.296 qpair failed and we were unable to recover it.
00:29:54.296 [2024-07-14 03:17:49.448104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.296 [2024-07-14 03:17:49.448299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.296 [2024-07-14 03:17:49.448326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.296 qpair failed and we were unable to recover it.
00:29:54.296 [2024-07-14 03:17:49.448526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.296 [2024-07-14 03:17:49.448701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.296 [2024-07-14 03:17:49.448727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.296 qpair failed and we were unable to recover it.
00:29:54.296 [2024-07-14 03:17:49.448888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.296 [2024-07-14 03:17:49.449042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.296 [2024-07-14 03:17:49.449067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.296 qpair failed and we were unable to recover it.
00:29:54.296 [2024-07-14 03:17:49.449238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.296 [2024-07-14 03:17:49.449395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.296 [2024-07-14 03:17:49.449420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.296 qpair failed and we were unable to recover it.
00:29:54.296 [2024-07-14 03:17:49.449603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.296 [2024-07-14 03:17:49.449747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.296 [2024-07-14 03:17:49.449773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.296 qpair failed and we were unable to recover it.
00:29:54.296 [2024-07-14 03:17:49.449950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.296 [2024-07-14 03:17:49.450129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.296 [2024-07-14 03:17:49.450153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.296 qpair failed and we were unable to recover it.
00:29:54.296 [2024-07-14 03:17:49.450380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.450618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.450646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 00:29:54.296 [2024-07-14 03:17:49.450839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.451010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.451039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 00:29:54.296 [2024-07-14 03:17:49.451269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.451448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.451473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 00:29:54.296 [2024-07-14 03:17:49.451656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.451807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.451832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 
00:29:54.296 [2024-07-14 03:17:49.452048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.452196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.452220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 00:29:54.296 [2024-07-14 03:17:49.452430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.452689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.452717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 00:29:54.296 [2024-07-14 03:17:49.452894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.453093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.453123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 00:29:54.296 [2024-07-14 03:17:49.453356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.453560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.453585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 
00:29:54.296 [2024-07-14 03:17:49.453740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.453892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.453919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 00:29:54.296 [2024-07-14 03:17:49.454092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.454310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.454338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 00:29:54.296 [2024-07-14 03:17:49.454532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.454725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.454752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 00:29:54.296 [2024-07-14 03:17:49.454949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.455103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.455145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 
00:29:54.296 [2024-07-14 03:17:49.455308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.455512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.455540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 00:29:54.296 [2024-07-14 03:17:49.455706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.455928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.455957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 00:29:54.296 [2024-07-14 03:17:49.456162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.456367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.456391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 00:29:54.296 [2024-07-14 03:17:49.456571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.456750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.456776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 
00:29:54.296 [2024-07-14 03:17:49.456988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.457134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.457158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 00:29:54.296 [2024-07-14 03:17:49.457336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.457488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.457513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 00:29:54.296 [2024-07-14 03:17:49.457751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.457936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.457964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 00:29:54.296 [2024-07-14 03:17:49.458143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.458315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.458340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 
00:29:54.296 [2024-07-14 03:17:49.458538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.458713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.458737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 00:29:54.296 [2024-07-14 03:17:49.458913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.459108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.459136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 00:29:54.296 [2024-07-14 03:17:49.459304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.459607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.296 [2024-07-14 03:17:49.459659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.296 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.459885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.460058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.460086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 
00:29:54.297 [2024-07-14 03:17:49.460286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.460478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.460506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.460677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.460837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.460864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.461097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.461368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.461415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.461617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.461767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.461792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 
00:29:54.297 [2024-07-14 03:17:49.461983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.462189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.462213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.462425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.462643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.462709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.462945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.463126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.463151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.463369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.463520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.463545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 
00:29:54.297 [2024-07-14 03:17:49.463696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.463897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.463925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.464087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.464410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.464461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.464661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.464858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.464893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.465088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.465322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.465350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 
00:29:54.297 [2024-07-14 03:17:49.465552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.465771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.465799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.465972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.466170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.466198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.466399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.466603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.466628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.466775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.466958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.466984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 
00:29:54.297 [2024-07-14 03:17:49.467195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.467385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.467412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.467612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.467783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.467811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.468040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.468205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.468232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.468429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.468620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.468647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 
00:29:54.297 [2024-07-14 03:17:49.468877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.469060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.469085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.469310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.469571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.469596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.469802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.469998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.470027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.470198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.470420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.470466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 
00:29:54.297 [2024-07-14 03:17:49.470672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.470838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.470871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.471063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.471256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.471283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.471473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.471668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.471693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.471896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.472051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.472075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 
00:29:54.297 [2024-07-14 03:17:49.472252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.472429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.472457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.297 qpair failed and we were unable to recover it. 00:29:54.297 [2024-07-14 03:17:49.472687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.472852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.297 [2024-07-14 03:17:49.472887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.298 qpair failed and we were unable to recover it. 00:29:54.298 [2024-07-14 03:17:49.473083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.473281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.473305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.298 qpair failed and we were unable to recover it. 00:29:54.298 [2024-07-14 03:17:49.473485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.473662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.473687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.298 qpair failed and we were unable to recover it. 
00:29:54.298 [2024-07-14 03:17:49.473877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.474061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.474089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.298 qpair failed and we were unable to recover it. 00:29:54.298 [2024-07-14 03:17:49.474285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.474482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.474510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.298 qpair failed and we were unable to recover it. 00:29:54.298 [2024-07-14 03:17:49.474704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.474898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.474926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.298 qpair failed and we were unable to recover it. 00:29:54.298 [2024-07-14 03:17:49.475131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.475361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.475415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.298 qpair failed and we were unable to recover it. 
00:29:54.298 [2024-07-14 03:17:49.475619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.475777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.475805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.298 qpair failed and we were unable to recover it. 00:29:54.298 [2024-07-14 03:17:49.476003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.476242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.476294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.298 qpair failed and we were unable to recover it. 00:29:54.298 [2024-07-14 03:17:49.476528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.476812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.476863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.298 qpair failed and we were unable to recover it. 00:29:54.298 [2024-07-14 03:17:49.477086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.477280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.298 [2024-07-14 03:17:49.477327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.298 qpair failed and we were unable to recover it. 
00:29:54.298 [2024-07-14 03:17:49.477585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.298 [2024-07-14 03:17:49.477808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.298 [2024-07-14 03:17:49.477835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.298 qpair failed and we were unable to recover it.
(the four-line failure sequence above repeats unchanged for each reconnect attempt, timestamps [2024-07-14 03:17:49.478024] through [2024-07-14 03:17:49.517513], always tqpair=0x7f4b04000b90, addr=10.0.0.2, port=4420, errno = 111)
00:29:54.572 [2024-07-14 03:17:49.517695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.572 [2024-07-14 03:17:49.517871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.517897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.518043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.518220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.518245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.518422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.518600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.518625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.518809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.519016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.519041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 
00:29:54.573 [2024-07-14 03:17:49.519230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.519428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.519453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.519659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.519827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.519852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.520035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.520340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.520390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.520592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.520763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.520791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 
00:29:54.573 [2024-07-14 03:17:49.521017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.521196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.521222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.521442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.521638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.521666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.521878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.522081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.522109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.522313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.522468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.522493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 
00:29:54.573 [2024-07-14 03:17:49.522721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.522988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.523016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.523213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.523382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.523411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.523579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.523780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.523808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.524031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.524240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.524293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 
00:29:54.573 [2024-07-14 03:17:49.524473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.524629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.524653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.524831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.525059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.525088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.525311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.525532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.525557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.525731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.525926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.525954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 
00:29:54.573 [2024-07-14 03:17:49.526160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.526384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.526452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.526747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.526981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.527007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.527178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.527443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.527496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.527676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.527827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.527877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 
00:29:54.573 [2024-07-14 03:17:49.528101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.528357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.528416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.528642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.528826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.528853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.529053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.529391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.529451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.529629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.529807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.529851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 
00:29:54.573 [2024-07-14 03:17:49.530034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.530225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.530252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.530443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.530759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.530829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.531044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.531309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.573 [2024-07-14 03:17:49.531361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.573 qpair failed and we were unable to recover it. 00:29:54.573 [2024-07-14 03:17:49.531560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.531733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.531761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 
00:29:54.574 [2024-07-14 03:17:49.531987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.532190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.532275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 00:29:54.574 [2024-07-14 03:17:49.532471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.532767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.532816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 00:29:54.574 [2024-07-14 03:17:49.533013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.533216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.533243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 00:29:54.574 [2024-07-14 03:17:49.533429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.533607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.533648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 
00:29:54.574 [2024-07-14 03:17:49.533840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.534063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.534091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 00:29:54.574 [2024-07-14 03:17:49.534279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.534568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.534619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 00:29:54.574 [2024-07-14 03:17:49.534824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.535010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.535054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 00:29:54.574 [2024-07-14 03:17:49.535229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.535385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.535409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 
00:29:54.574 [2024-07-14 03:17:49.535594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.535808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.535835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 00:29:54.574 [2024-07-14 03:17:49.536050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.536371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.536422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 00:29:54.574 [2024-07-14 03:17:49.536618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.536813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.536841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 00:29:54.574 [2024-07-14 03:17:49.537012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.537156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.537196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 
00:29:54.574 [2024-07-14 03:17:49.537418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.537601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.537628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 00:29:54.574 [2024-07-14 03:17:49.537827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.538024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.538051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 00:29:54.574 [2024-07-14 03:17:49.538268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.538483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.538510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 00:29:54.574 [2024-07-14 03:17:49.538735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.539000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.539050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 
00:29:54.574 [2024-07-14 03:17:49.539230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.539408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.539452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 00:29:54.574 [2024-07-14 03:17:49.539675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.539912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.539938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 00:29:54.574 [2024-07-14 03:17:49.540150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.540332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.540375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 00:29:54.574 [2024-07-14 03:17:49.540603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.540830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.540857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 
00:29:54.574 [2024-07-14 03:17:49.541090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.541329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.541381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 00:29:54.574 [2024-07-14 03:17:49.541589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.541768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.541795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 00:29:54.574 [2024-07-14 03:17:49.541995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.542215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.542264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 00:29:54.574 [2024-07-14 03:17:49.542467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.542792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.542849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 
00:29:54.574 [2024-07-14 03:17:49.543079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.543244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.574 [2024-07-14 03:17:49.543273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.574 qpair failed and we were unable to recover it. 
[... identical failure cycle (posix.c:1032:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeated through 2024-07-14 03:17:49.583049 / console time 00:29:54.577 ...]
00:29:54.577 [2024-07-14 03:17:49.583222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.577 [2024-07-14 03:17:49.583451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.577 [2024-07-14 03:17:49.583475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.577 qpair failed and we were unable to recover it. 00:29:54.577 [2024-07-14 03:17:49.583634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.577 [2024-07-14 03:17:49.583830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.577 [2024-07-14 03:17:49.583857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.577 qpair failed and we were unable to recover it. 00:29:54.577 [2024-07-14 03:17:49.584042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.577 [2024-07-14 03:17:49.584262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.577 [2024-07-14 03:17:49.584290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.577 qpair failed and we were unable to recover it. 00:29:54.577 [2024-07-14 03:17:49.584468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.577 [2024-07-14 03:17:49.584617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.584641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 
00:29:54.578 [2024-07-14 03:17:49.584801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.585007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.585035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.585201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.585397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.585421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.585602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.585755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.585798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.585989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.586226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.586285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 
00:29:54.578 [2024-07-14 03:17:49.586505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.586724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.586752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.586949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.587130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.587156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.587312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.587483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.587508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.587682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.587910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.587938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 
00:29:54.578 [2024-07-14 03:17:49.588139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.588458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.588507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.588716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.588894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.588921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.589120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.589312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.589339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.589541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.589740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.589768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 
00:29:54.578 [2024-07-14 03:17:49.589971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.590248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.590303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.590509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.590711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.590736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.590939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.591217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.591243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.591447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.591645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.591701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 
00:29:54.578 [2024-07-14 03:17:49.591908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.592102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.592130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.592330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.592551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.592599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.592801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.592996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.593024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.593203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.593401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.593429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 
00:29:54.578 [2024-07-14 03:17:49.593631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.593820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.593848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.594051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.594259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.594283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.594462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.594660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.594685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.594883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.595083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.595112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 
00:29:54.578 [2024-07-14 03:17:49.595317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.595533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.595560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.595783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.595980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.596009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.596211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.596433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.596461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.596633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.596834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.596861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 
00:29:54.578 [2024-07-14 03:17:49.597072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.597260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.597288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.578 [2024-07-14 03:17:49.597489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.597692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.578 [2024-07-14 03:17:49.597717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.578 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.597926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.598287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.598339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.598564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.598721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.598746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 
00:29:54.579 [2024-07-14 03:17:49.598926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.599126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.599152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.599354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.599585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.599613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.599839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.600077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.600103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.600283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.600509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.600560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 
00:29:54.579 [2024-07-14 03:17:49.600761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.600948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.600975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.601132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.601335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.601360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.601543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.601773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.601797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.601944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.602137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.602165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 
00:29:54.579 [2024-07-14 03:17:49.602332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.602674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.602727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.602948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.603123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.603151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.603309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.603599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.603649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.603875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.604050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.604077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 
00:29:54.579 [2024-07-14 03:17:49.604278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.604476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.604504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.604694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.604883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.604912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.605085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.605276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.605301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.605478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.605623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.605647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 
00:29:54.579 [2024-07-14 03:17:49.605853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.606057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.606085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.606287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.606516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.606544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.606718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.606949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.606974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.607158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.607389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.607440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 
00:29:54.579 [2024-07-14 03:17:49.607631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.607845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.607885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.608090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.608285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.608313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.608541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.608790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.608851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 00:29:54.579 [2024-07-14 03:17:49.609059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.609273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.579 [2024-07-14 03:17:49.609342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.579 qpair failed and we were unable to recover it. 
00:29:54.582 [2024-07-14 03:17:49.647090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.582 [2024-07-14 03:17:49.647273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.582 [2024-07-14 03:17:49.647298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.582 qpair failed and we were unable to recover it. 00:29:54.582 [2024-07-14 03:17:49.647503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.582 [2024-07-14 03:17:49.647688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.582 [2024-07-14 03:17:49.647713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.582 qpair failed and we were unable to recover it. 00:29:54.582 [2024-07-14 03:17:49.647893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.582 [2024-07-14 03:17:49.648089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.582 [2024-07-14 03:17:49.648118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.582 qpair failed and we were unable to recover it. 00:29:54.582 [2024-07-14 03:17:49.648349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.582 [2024-07-14 03:17:49.648530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.582 [2024-07-14 03:17:49.648556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.582 qpair failed and we were unable to recover it. 
00:29:54.582 [2024-07-14 03:17:49.648792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.582 [2024-07-14 03:17:49.648972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.582 [2024-07-14 03:17:49.649001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.582 qpair failed and we were unable to recover it. 00:29:54.582 [2024-07-14 03:17:49.649175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.582 [2024-07-14 03:17:49.649477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.582 [2024-07-14 03:17:49.649536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.582 qpair failed and we were unable to recover it. 00:29:54.582 [2024-07-14 03:17:49.649916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.582 [2024-07-14 03:17:49.650087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.582 [2024-07-14 03:17:49.650115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.582 qpair failed and we were unable to recover it. 00:29:54.582 [2024-07-14 03:17:49.650294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.582 [2024-07-14 03:17:49.650500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.582 [2024-07-14 03:17:49.650563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.582 qpair failed and we were unable to recover it. 
00:29:54.582 [2024-07-14 03:17:49.650758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.582 [2024-07-14 03:17:49.650948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.650980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.651201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.651530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.651586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.651782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.651989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.652015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.652171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.652376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.652401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 
00:29:54.583 [2024-07-14 03:17:49.652579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.652749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.652774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.652932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.653113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.653138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.653339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.653664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.653724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.653912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.654121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.654146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 
00:29:54.583 [2024-07-14 03:17:49.654324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.654603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.654657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.654856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.655060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.655087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.655285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.655590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.655645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.655881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.656058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.656087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 
00:29:54.583 [2024-07-14 03:17:49.656288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.656577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.656628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.656818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.657023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.657052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.657282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.657494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.657519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.657743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.657914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.657941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 
00:29:54.583 [2024-07-14 03:17:49.658144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.658321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.658346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.658543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.658707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.658736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.658931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.659099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.659128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.659345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.659614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.659642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 
00:29:54.583 [2024-07-14 03:17:49.659831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.660023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.660049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.660259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.660413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.660438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.660638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.660786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.660810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.660988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.661174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.661199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 
00:29:54.583 [2024-07-14 03:17:49.661352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.661560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.661585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.661804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.662012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.662038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.662214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.662391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.662417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.662597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.662789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.662816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 
00:29:54.583 [2024-07-14 03:17:49.663049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.663245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.663273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.663502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.663656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.663680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.583 [2024-07-14 03:17:49.663859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.664072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.583 [2024-07-14 03:17:49.664097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.583 qpair failed and we were unable to recover it. 00:29:54.584 [2024-07-14 03:17:49.664254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.664433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.664458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.584 qpair failed and we were unable to recover it. 
00:29:54.584 [2024-07-14 03:17:49.664634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.664784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.664808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.584 qpair failed and we were unable to recover it. 00:29:54.584 [2024-07-14 03:17:49.664990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.665169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.665194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.584 qpair failed and we were unable to recover it. 00:29:54.584 [2024-07-14 03:17:49.665390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.665674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.665723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.584 qpair failed and we were unable to recover it. 00:29:54.584 [2024-07-14 03:17:49.665920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.666100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.666125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.584 qpair failed and we were unable to recover it. 
00:29:54.584 [2024-07-14 03:17:49.666304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.666459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.666483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.584 qpair failed and we were unable to recover it. 00:29:54.584 [2024-07-14 03:17:49.666634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.666836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.666861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.584 qpair failed and we were unable to recover it. 00:29:54.584 [2024-07-14 03:17:49.667045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.667222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.667246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.584 qpair failed and we were unable to recover it. 00:29:54.584 [2024-07-14 03:17:49.667423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.667594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.667635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.584 qpair failed and we were unable to recover it. 
00:29:54.584 [2024-07-14 03:17:49.667791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.668014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.668052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.584 qpair failed and we were unable to recover it. 00:29:54.584 [2024-07-14 03:17:49.668279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.668458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.668485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.584 qpair failed and we were unable to recover it. 00:29:54.584 [2024-07-14 03:17:49.668678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.668879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.668908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.584 qpair failed and we were unable to recover it. 00:29:54.584 [2024-07-14 03:17:49.669075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.669249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.669273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.584 qpair failed and we were unable to recover it. 
00:29:54.584 [2024-07-14 03:17:49.669418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.669599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.669625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.584 qpair failed and we were unable to recover it. 00:29:54.584 [2024-07-14 03:17:49.669797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.670028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.670057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.584 qpair failed and we were unable to recover it. 00:29:54.584 [2024-07-14 03:17:49.670280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.670601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.670651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.584 qpair failed and we were unable to recover it. 00:29:54.584 [2024-07-14 03:17:49.670886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.671067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.584 [2024-07-14 03:17:49.671092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.584 qpair failed and we were unable to recover it. 
00:29:54.584 [2024-07-14 03:17:49.671247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.584 [2024-07-14 03:17:49.671450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.584 [2024-07-14 03:17:49.671475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.584 qpair failed and we were unable to recover it.
[... the same failure sequence (two posix.c:1032:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats approximately 88 times between 03:17:49.671 and 03:17:49.708 ...]
00:29:54.587 [2024-07-14 03:17:49.708641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.708808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.708835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.587 qpair failed and we were unable to recover it. 00:29:54.587 [2024-07-14 03:17:49.709085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.709230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.709255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.587 qpair failed and we were unable to recover it. 00:29:54.587 [2024-07-14 03:17:49.709435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.709627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.709654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.587 qpair failed and we were unable to recover it. 00:29:54.587 [2024-07-14 03:17:49.709882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.710056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.710080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.587 qpair failed and we were unable to recover it. 
00:29:54.587 [2024-07-14 03:17:49.710235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.710432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.710460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.587 qpair failed and we were unable to recover it. 00:29:54.587 [2024-07-14 03:17:49.710653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.710847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.710885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.587 qpair failed and we were unable to recover it. 00:29:54.587 [2024-07-14 03:17:49.711081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.711314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.711338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.587 qpair failed and we were unable to recover it. 00:29:54.587 [2024-07-14 03:17:49.711517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.711695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.711720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.587 qpair failed and we were unable to recover it. 
00:29:54.587 [2024-07-14 03:17:49.711901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.712058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.712083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.587 qpair failed and we were unable to recover it. 00:29:54.587 [2024-07-14 03:17:49.712281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.712582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.712633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.587 qpair failed and we were unable to recover it. 00:29:54.587 [2024-07-14 03:17:49.712831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.713038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.713066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.587 qpair failed and we were unable to recover it. 00:29:54.587 [2024-07-14 03:17:49.713231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.713431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.713456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.587 qpair failed and we were unable to recover it. 
00:29:54.587 [2024-07-14 03:17:49.713625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.713829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.587 [2024-07-14 03:17:49.713856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.587 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.714037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.714188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.714213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.714396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.714574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.714599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.714801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.714979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.715008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 
00:29:54.588 [2024-07-14 03:17:49.715167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.715371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.715401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.715596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.715767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.715794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.716041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.716241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.716299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.716470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.716672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.716697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 
00:29:54.588 [2024-07-14 03:17:49.716906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.717110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.717136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.717340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.717541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.717569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.717761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.718020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.718048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.718267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.718483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.718507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 
00:29:54.588 [2024-07-14 03:17:49.718708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.718901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.718929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.719123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.719345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.719400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.719599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.719794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.719820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.720033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.720304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.720354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 
00:29:54.588 [2024-07-14 03:17:49.720551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.720725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.720752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.720941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.721130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.721157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.721363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.721556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.721584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.721788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.721967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.721992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 
00:29:54.588 [2024-07-14 03:17:49.722222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.722503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.722553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.722746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.722938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.722967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.723166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.723372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.723396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.723563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.723759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.723787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 
00:29:54.588 [2024-07-14 03:17:49.723960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.724161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.724187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.724334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.724535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.724560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.724764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.724973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.725002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.725195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.725364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.725388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 
00:29:54.588 [2024-07-14 03:17:49.725583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.725804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.725832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.726023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.726280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.726333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.726554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.726720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.726748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.588 qpair failed and we were unable to recover it. 00:29:54.588 [2024-07-14 03:17:49.726970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.588 [2024-07-14 03:17:49.727124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.727148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.589 qpair failed and we were unable to recover it. 
00:29:54.589 [2024-07-14 03:17:49.727352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.727619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.727668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.589 qpair failed and we were unable to recover it. 00:29:54.589 [2024-07-14 03:17:49.727892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.728067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.728094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.589 qpair failed and we were unable to recover it. 00:29:54.589 [2024-07-14 03:17:49.728299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.728495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.728522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.589 qpair failed and we were unable to recover it. 00:29:54.589 [2024-07-14 03:17:49.728698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.728853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.728897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.589 qpair failed and we were unable to recover it. 
00:29:54.589 [2024-07-14 03:17:49.729103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.729277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.729302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.589 qpair failed and we were unable to recover it. 00:29:54.589 [2024-07-14 03:17:49.729479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.729676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.729703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.589 qpair failed and we were unable to recover it. 00:29:54.589 [2024-07-14 03:17:49.729915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.730096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.730121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.589 qpair failed and we were unable to recover it. 00:29:54.589 [2024-07-14 03:17:49.730295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.730473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.730498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.589 qpair failed and we were unable to recover it. 
00:29:54.589 [2024-07-14 03:17:49.730653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.730797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.730823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.589 qpair failed and we were unable to recover it. 00:29:54.589 [2024-07-14 03:17:49.731012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.731197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.731222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.589 qpair failed and we were unable to recover it. 00:29:54.589 [2024-07-14 03:17:49.731446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.731638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.731665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.589 qpair failed and we were unable to recover it. 00:29:54.589 [2024-07-14 03:17:49.731828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.732035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.732060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.589 qpair failed and we were unable to recover it. 
00:29:54.589 [2024-07-14 03:17:49.732220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.732484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.732535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.589 qpair failed and we were unable to recover it. 00:29:54.589 [2024-07-14 03:17:49.732728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.732958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.733007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.589 qpair failed and we were unable to recover it. 00:29:54.589 [2024-07-14 03:17:49.733216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.733436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.733498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.589 qpair failed and we were unable to recover it. 00:29:54.589 [2024-07-14 03:17:49.733699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.733876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.589 [2024-07-14 03:17:49.733901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.589 qpair failed and we were unable to recover it. 
[... the same error cycle (two posix_sock_create connect() failures with errno = 111, then an nvme_tcp_qpair_connect_sock sock connection error on tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 03:17:49.734145 through 03:17:49.770171; repeated records elided.]
00:29:54.592 [2024-07-14 03:17:49.770372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.770596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.770648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.592 qpair failed and we were unable to recover it. 00:29:54.592 [2024-07-14 03:17:49.770812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.771013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.771040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.592 qpair failed and we were unable to recover it. 00:29:54.592 [2024-07-14 03:17:49.771237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.771503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.771559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.592 qpair failed and we were unable to recover it. 00:29:54.592 [2024-07-14 03:17:49.771754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.771929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.771963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.592 qpair failed and we were unable to recover it. 
00:29:54.592 [2024-07-14 03:17:49.772139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.772347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.772373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.592 qpair failed and we were unable to recover it. 00:29:54.592 [2024-07-14 03:17:49.772527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.772710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.772736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.592 qpair failed and we were unable to recover it. 00:29:54.592 [2024-07-14 03:17:49.772961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.773166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.773191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.592 qpair failed and we were unable to recover it. 00:29:54.592 [2024-07-14 03:17:49.773374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.773579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.773604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.592 qpair failed and we were unable to recover it. 
00:29:54.592 [2024-07-14 03:17:49.773800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.773992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.774020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.592 qpair failed and we were unable to recover it. 00:29:54.592 [2024-07-14 03:17:49.774191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.774377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.774405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.592 qpair failed and we were unable to recover it. 00:29:54.592 [2024-07-14 03:17:49.774623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.774838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.774863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.592 qpair failed and we were unable to recover it. 00:29:54.592 [2024-07-14 03:17:49.775052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.775307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.775358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.592 qpair failed and we were unable to recover it. 
00:29:54.592 [2024-07-14 03:17:49.775585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.775780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.775812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.592 qpair failed and we were unable to recover it. 00:29:54.592 [2024-07-14 03:17:49.776038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.776240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.776268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.592 qpair failed and we were unable to recover it. 00:29:54.592 [2024-07-14 03:17:49.776465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.776659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.776686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.592 qpair failed and we were unable to recover it. 00:29:54.592 [2024-07-14 03:17:49.776911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.777113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.777138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.592 qpair failed and we were unable to recover it. 
00:29:54.592 [2024-07-14 03:17:49.777365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.592 [2024-07-14 03:17:49.777646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.777698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.777896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.778092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.778120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.778312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.778505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.778532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.778737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.778943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.778969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 
00:29:54.593 [2024-07-14 03:17:49.779122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.779274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.779298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.779500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.779716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.779742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.779923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.780085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.780131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.780314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.780466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.780490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 
00:29:54.593 [2024-07-14 03:17:49.780653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.780828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.780853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.781077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.781288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.781313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.781459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.781613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.781639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.781813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.781992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.782018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 
00:29:54.593 [2024-07-14 03:17:49.782202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.782421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.782447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.782625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.782781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.782808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.782984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.783157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.783183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.783373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.783692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.783755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 
00:29:54.593 [2024-07-14 03:17:49.783953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.784112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.784141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.784339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.784621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.784670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.784861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.785066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.785095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.785314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.785505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.785530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 
00:29:54.593 [2024-07-14 03:17:49.785712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.785914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.785942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.786140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.786313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.786341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.786545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.786747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.786772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.786926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.787123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.787151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 
00:29:54.593 [2024-07-14 03:17:49.787375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.787545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.787570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.787727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.787905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.787930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.788111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.788287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.788312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.788529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.788702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.788727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 
00:29:54.593 [2024-07-14 03:17:49.788889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.789038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.789064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.789237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.789436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.789488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.789719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.789917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.593 [2024-07-14 03:17:49.789946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.593 qpair failed and we were unable to recover it. 00:29:54.593 [2024-07-14 03:17:49.790114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.594 [2024-07-14 03:17:49.790288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.594 [2024-07-14 03:17:49.790313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.594 qpair failed and we were unable to recover it. 
00:29:54.594 [2024-07-14 03:17:49.790507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.594 [2024-07-14 03:17:49.790685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.594 [2024-07-14 03:17:49.790709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.594 qpair failed and we were unable to recover it. 00:29:54.594 [2024-07-14 03:17:49.790891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.594 [2024-07-14 03:17:49.791089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.594 [2024-07-14 03:17:49.791118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.594 qpair failed and we were unable to recover it. 00:29:54.594 [2024-07-14 03:17:49.791901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.594 [2024-07-14 03:17:49.792158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.594 [2024-07-14 03:17:49.792188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.594 qpair failed and we were unable to recover it. 00:29:54.594 [2024-07-14 03:17:49.792373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.594 [2024-07-14 03:17:49.792553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.594 [2024-07-14 03:17:49.792580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.594 qpair failed and we were unable to recover it. 
00:29:54.594 [2024-07-14 03:17:49.792762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.594 [2024-07-14 03:17:49.792965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.594 [2024-07-14 03:17:49.792990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.594 qpair failed and we were unable to recover it. 00:29:54.594 [2024-07-14 03:17:49.793174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.594 [2024-07-14 03:17:49.793329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.594 [2024-07-14 03:17:49.793355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.594 qpair failed and we were unable to recover it. 00:29:54.594 [2024-07-14 03:17:49.793520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.594 [2024-07-14 03:17:49.793673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.594 [2024-07-14 03:17:49.793698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.594 qpair failed and we were unable to recover it. 00:29:54.594 [2024-07-14 03:17:49.793902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.594 [2024-07-14 03:17:49.794106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.594 [2024-07-14 03:17:49.794134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.594 qpair failed and we were unable to recover it. 
00:29:54.594 [2024-07-14 03:17:49.794323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.594 [2024-07-14 03:17:49.794505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.594 [2024-07-14 03:17:49.794529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.594 qpair failed and we were unable to recover it.
[... the same four-record sequence (connect() failed, errno = 111 x2 -> sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every subsequent reconnect attempt, roughly 87 more times, timestamps 2024-07-14 03:17:49.794 through 03:17:49.831, elapsed markers 00:29:54.594 through 00:29:54.872 ...]
00:29:54.872 [2024-07-14 03:17:49.831594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.831772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.831797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.872 qpair failed and we were unable to recover it. 00:29:54.872 [2024-07-14 03:17:49.832000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.832239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.832287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.872 qpair failed and we were unable to recover it. 00:29:54.872 [2024-07-14 03:17:49.832480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.832676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.832704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.872 qpair failed and we were unable to recover it. 00:29:54.872 [2024-07-14 03:17:49.832876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.833086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.833114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.872 qpair failed and we were unable to recover it. 
00:29:54.872 [2024-07-14 03:17:49.833303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.833554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.833603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.872 qpair failed and we were unable to recover it. 00:29:54.872 [2024-07-14 03:17:49.833835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.834038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.834067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.872 qpair failed and we were unable to recover it. 00:29:54.872 [2024-07-14 03:17:49.834290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.834566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.834612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.872 qpair failed and we were unable to recover it. 00:29:54.872 [2024-07-14 03:17:49.834818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.835057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.835086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.872 qpair failed and we were unable to recover it. 
00:29:54.872 [2024-07-14 03:17:49.835273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.835490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.835540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.872 qpair failed and we were unable to recover it. 00:29:54.872 [2024-07-14 03:17:49.835761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.835997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.836026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.872 qpair failed and we were unable to recover it. 00:29:54.872 [2024-07-14 03:17:49.836243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.836507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.836557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.872 qpair failed and we were unable to recover it. 00:29:54.872 [2024-07-14 03:17:49.836777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.836979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.837006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.872 qpair failed and we were unable to recover it. 
00:29:54.872 [2024-07-14 03:17:49.837201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.837438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.837490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.872 qpair failed and we were unable to recover it. 00:29:54.872 [2024-07-14 03:17:49.837688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.837883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.872 [2024-07-14 03:17:49.837912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.872 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.838122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.838273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.838316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.838513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.838725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.838777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 
00:29:54.873 [2024-07-14 03:17:49.838971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.839291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.839341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.839541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.839732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.839761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.839958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.840216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.840268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.840468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.840642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.840675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 
00:29:54.873 [2024-07-14 03:17:49.840840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.841044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.841073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.841266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.841458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.841486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.841646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.841844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.841879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.842067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.842311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.842360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 
00:29:54.873 [2024-07-14 03:17:49.842585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.842753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.842780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.842944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.843173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.843199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.843405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.843725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.843782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.843974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.844194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.844222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 
00:29:54.873 [2024-07-14 03:17:49.844420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.844707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.844758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.844960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.845133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.845162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.845334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.845530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.845557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.845759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.845997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.846052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 
00:29:54.873 [2024-07-14 03:17:49.846226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.846404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.846433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.846656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.846829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.846856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.847078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.847235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.847263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.847441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.847642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.847666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 
00:29:54.873 [2024-07-14 03:17:49.847878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.848081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.848106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.848283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.848430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.848454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.848655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.848881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.848910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.849097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.849328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.849361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 
00:29:54.873 [2024-07-14 03:17:49.849564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.849751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.849778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.849999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.850187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.850224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.850421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.850596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.850637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 00:29:54.873 [2024-07-14 03:17:49.850825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.850997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.851051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.873 qpair failed and we were unable to recover it. 
00:29:54.873 [2024-07-14 03:17:49.851277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.873 [2024-07-14 03:17:49.851546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.851599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.874 qpair failed and we were unable to recover it. 00:29:54.874 [2024-07-14 03:17:49.851821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.852039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.852065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.874 qpair failed and we were unable to recover it. 00:29:54.874 [2024-07-14 03:17:49.852249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.852531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.852583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.874 qpair failed and we were unable to recover it. 00:29:54.874 [2024-07-14 03:17:49.852752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.852932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.852958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.874 qpair failed and we were unable to recover it. 
00:29:54.874 [2024-07-14 03:17:49.853158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.853321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.853349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.874 qpair failed and we were unable to recover it. 00:29:54.874 [2024-07-14 03:17:49.853515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.853710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.853746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.874 qpair failed and we were unable to recover it. 00:29:54.874 [2024-07-14 03:17:49.853920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.854149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.854174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.874 qpair failed and we were unable to recover it. 00:29:54.874 [2024-07-14 03:17:49.854352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.854624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.854672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.874 qpair failed and we were unable to recover it. 
00:29:54.874 [2024-07-14 03:17:49.854896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.855096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.855121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.874 qpair failed and we were unable to recover it. 00:29:54.874 [2024-07-14 03:17:49.855344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.855630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.855680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.874 qpair failed and we were unable to recover it. 00:29:54.874 [2024-07-14 03:17:49.855886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.856065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.856090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.874 qpair failed and we were unable to recover it. 00:29:54.874 [2024-07-14 03:17:49.856299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.856580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.874 [2024-07-14 03:17:49.856639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.874 qpair failed and we were unable to recover it. 
00:29:54.874 [2024-07-14 03:17:49.856862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.857067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.857094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.857292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.857475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.857503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.857711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.857877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.857907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.858084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.858288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.858338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.858542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.858749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.858800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.859039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.859197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.859223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.859427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.859708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.859761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.859968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.860163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.860190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.860361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.860560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.860588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.860782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.861013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.861038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.861267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.861509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.861537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.861741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.861933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.861990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.862231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.862429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.862453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.862603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.862758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.862784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.863024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.863203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.863227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.863408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.863589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.863615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.863820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.864060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.864086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.864257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.864478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.864532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.864727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.864965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.874 [2024-07-14 03:17:49.865016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.874 qpair failed and we were unable to recover it.
00:29:54.874 [2024-07-14 03:17:49.865241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.865483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.865531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.865732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.865915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.865941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.866150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.866474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.866527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.866755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.867073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.867125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.867331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.867633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.867692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.867925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.868213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.868262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.868470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.868741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.868791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.868988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.869187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.869215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.869416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.869731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.869789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.869984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.870206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.870231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.870402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.870711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.870761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.870957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.871190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.871251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.871458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.871610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.871635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.871839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.872019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.872046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.872237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.872498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.872523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.872710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.872912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.872937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.873089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.873282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.873309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.873512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.873731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.873759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.873957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.874135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.874163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.874331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.874621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.874671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.874894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.875062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.875091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.875254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.875486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.875543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.875779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.875954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.875992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.876203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.876431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.876482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.875 qpair failed and we were unable to recover it.
00:29:54.875 [2024-07-14 03:17:49.876691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.876891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.875 [2024-07-14 03:17:49.876920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.877125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.877336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.877386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.877613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.877804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.877832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.878048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.878192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.878217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.878398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.878665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.878715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.878942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.879120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.879147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.879325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.879549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.879577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.879784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.880020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.880048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.880225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.880384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.880426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.880629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.880855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.880891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.881096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.881375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.881425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.881628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.881816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.881843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.882039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.882276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.882325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.882552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.882741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.882769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.882955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.883177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.883205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.883376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.883605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.883657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.883823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.884047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.884076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.884243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.884446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.884472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.884653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.884827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.884851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.885083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.885331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.885386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.885587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.885809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.885837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.886068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.886426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.886486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.886699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.886894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.886922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.887110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.887375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.887426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.887643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.887829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.887857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.888084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.888274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.888299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.888510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.888692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.888717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.888871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.889052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.889077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.889224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.889399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.889424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.889960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.890170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.890196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.890351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.890530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.876 [2024-07-14 03:17:49.890556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.876 qpair failed and we were unable to recover it.
00:29:54.876 [2024-07-14 03:17:49.890744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.890941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.890967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.877 qpair failed and we were unable to recover it.
00:29:54.877 [2024-07-14 03:17:49.891122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.891314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.891343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.877 qpair failed and we were unable to recover it.
00:29:54.877 [2024-07-14 03:17:49.891541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.891735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.891764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.877 qpair failed and we were unable to recover it.
00:29:54.877 [2024-07-14 03:17:49.891934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.892154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.892183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.877 qpair failed and we were unable to recover it.
00:29:54.877 [2024-07-14 03:17:49.892389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.892672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.892722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.877 qpair failed and we were unable to recover it.
00:29:54.877 [2024-07-14 03:17:49.892938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.893167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.893196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.877 qpair failed and we were unable to recover it.
00:29:54.877 [2024-07-14 03:17:49.893398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.893674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.893724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.877 qpair failed and we were unable to recover it.
00:29:54.877 [2024-07-14 03:17:49.893956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.894133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.894159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.877 qpair failed and we were unable to recover it.
00:29:54.877 [2024-07-14 03:17:49.894334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.894535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.894597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.877 qpair failed and we were unable to recover it.
00:29:54.877 [2024-07-14 03:17:49.894847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.895061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.895087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.877 qpair failed and we were unable to recover it.
00:29:54.877 [2024-07-14 03:17:49.895240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.895418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.895444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.877 qpair failed and we were unable to recover it.
00:29:54.877 [2024-07-14 03:17:49.895646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.895830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.895855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.877 qpair failed and we were unable to recover it.
00:29:54.877 [2024-07-14 03:17:49.896041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.896345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.896405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.877 qpair failed and we were unable to recover it.
00:29:54.877 [2024-07-14 03:17:49.896607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.896757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.877 [2024-07-14 03:17:49.896783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.877 qpair failed and we were unable to recover it.
00:29:54.877 [2024-07-14 03:17:49.896964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.897143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.897169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.877 qpair failed and we were unable to recover it. 00:29:54.877 [2024-07-14 03:17:49.897322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.897476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.897502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.877 qpair failed and we were unable to recover it. 00:29:54.877 [2024-07-14 03:17:49.897709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.897913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.897938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.877 qpair failed and we were unable to recover it. 00:29:54.877 [2024-07-14 03:17:49.898113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.898289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.898314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.877 qpair failed and we were unable to recover it. 
00:29:54.877 [2024-07-14 03:17:49.898492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.898664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.898689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.877 qpair failed and we were unable to recover it. 00:29:54.877 [2024-07-14 03:17:49.898872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.899059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.899085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.877 qpair failed and we were unable to recover it. 00:29:54.877 [2024-07-14 03:17:49.899243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.899430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.899455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.877 qpair failed and we were unable to recover it. 00:29:54.877 [2024-07-14 03:17:49.899612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.899812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.899838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.877 qpair failed and we were unable to recover it. 
00:29:54.877 [2024-07-14 03:17:49.900048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.900229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.900254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.877 qpair failed and we were unable to recover it. 00:29:54.877 [2024-07-14 03:17:49.900403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.900617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.900643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.877 qpair failed and we were unable to recover it. 00:29:54.877 [2024-07-14 03:17:49.900805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.900987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.901015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.877 qpair failed and we were unable to recover it. 00:29:54.877 [2024-07-14 03:17:49.901191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.901368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.901393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.877 qpair failed and we were unable to recover it. 
00:29:54.877 [2024-07-14 03:17:49.901602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.901785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.901811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.877 qpair failed and we were unable to recover it. 00:29:54.877 [2024-07-14 03:17:49.901994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.902172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.902197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.877 qpair failed and we were unable to recover it. 00:29:54.877 [2024-07-14 03:17:49.902372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.902578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.902603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.877 qpair failed and we were unable to recover it. 00:29:54.877 [2024-07-14 03:17:49.902781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.902954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.902981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.877 qpair failed and we were unable to recover it. 
00:29:54.877 [2024-07-14 03:17:49.903167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.903353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.877 [2024-07-14 03:17:49.903379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.877 qpair failed and we were unable to recover it. 00:29:54.878 [2024-07-14 03:17:49.903561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.903727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.903754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 00:29:54.878 [2024-07-14 03:17:49.903917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.904114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.904141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 00:29:54.878 [2024-07-14 03:17:49.904337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.904512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.904537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 
00:29:54.878 [2024-07-14 03:17:49.904694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.904874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.904904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 00:29:54.878 [2024-07-14 03:17:49.905097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.905267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.905295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 00:29:54.878 [2024-07-14 03:17:49.905495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.905798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.905848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 00:29:54.878 [2024-07-14 03:17:49.906058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.906259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.906285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 
00:29:54.878 [2024-07-14 03:17:49.906442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.906641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.906666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 00:29:54.878 [2024-07-14 03:17:49.906874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.907050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.907077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 00:29:54.878 [2024-07-14 03:17:49.907224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.907435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.907468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 00:29:54.878 [2024-07-14 03:17:49.907632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.907810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.907837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 
00:29:54.878 [2024-07-14 03:17:49.908029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.908197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.908223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 00:29:54.878 [2024-07-14 03:17:49.908412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.908591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.908616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 00:29:54.878 [2024-07-14 03:17:49.908792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.908937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.908962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 00:29:54.878 [2024-07-14 03:17:49.909143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.909369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.909398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 
00:29:54.878 [2024-07-14 03:17:49.909588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.909758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.909788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 00:29:54.878 [2024-07-14 03:17:49.909979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.910179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.910207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 00:29:54.878 [2024-07-14 03:17:49.910382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.910587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.910615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 00:29:54.878 [2024-07-14 03:17:49.910821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.911010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.878 [2024-07-14 03:17:49.911036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.878 qpair failed and we were unable to recover it. 
00:29:54.878 [2024-07-14 03:17:49.911995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.878 [2024-07-14 03:17:49.912186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.878 [2024-07-14 03:17:49.912213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:54.878 qpair failed and we were unable to recover it.
00:29:54.879 [2024-07-14 03:17:49.926148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.879 [2024-07-14 03:17:49.926339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.879 [2024-07-14 03:17:49.926364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.879 qpair failed and we were unable to recover it. 00:29:54.879 [2024-07-14 03:17:49.926553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.879 [2024-07-14 03:17:49.926728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.879 [2024-07-14 03:17:49.926755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.926949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.927102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.927127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.927305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.927522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.927568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 
00:29:54.880 [2024-07-14 03:17:49.927741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.927934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.927962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.928118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.928305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.928330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.928554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.928715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.928742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.928942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.929101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.929130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 
00:29:54.880 [2024-07-14 03:17:49.929328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.929575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.929621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.929812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.930015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.930043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.930198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.930474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.930523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.930692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.930835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.930860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 
00:29:54.880 [2024-07-14 03:17:49.931035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.931213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.931245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.931457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.931719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.931767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.931960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.932176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.932227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.932427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.932629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.932674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 
00:29:54.880 [2024-07-14 03:17:49.932877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.933072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.933100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.933265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.933489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.933536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.933762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.933953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.933981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.934165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.934376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.934423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 
00:29:54.880 [2024-07-14 03:17:49.934655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.934889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.934917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.935116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.935367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.935412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.935610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.935801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.935829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.936035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.936243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.936271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 
00:29:54.880 [2024-07-14 03:17:49.936493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.936670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.936697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.936920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.937109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.937142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.937348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.937497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.937521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.937703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.937895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.937921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 
00:29:54.880 [2024-07-14 03:17:49.938127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.938307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.938351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.938595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.938759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.938787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.938952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.939164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.939196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 00:29:54.880 [2024-07-14 03:17:49.939380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.939581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.939632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.880 qpair failed and we were unable to recover it. 
00:29:54.880 [2024-07-14 03:17:49.939829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.940006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.880 [2024-07-14 03:17:49.940034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.940221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.940472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.940520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.940711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.940880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.940908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.941078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.941268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.941319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 
00:29:54.881 [2024-07-14 03:17:49.941548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.941742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.941787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.942028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.942181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.942223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.942396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.942652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.942701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.942913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.943088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.943113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 
00:29:54.881 [2024-07-14 03:17:49.943323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.943523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.943548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.943705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.943893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.943922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.944118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.944335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.944360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.944512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.944691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.944716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 
00:29:54.881 [2024-07-14 03:17:49.944917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.945110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.945137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.945327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.945573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.945623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.945789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.945961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.945989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.946190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.946391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.946416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 
00:29:54.881 [2024-07-14 03:17:49.946593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.946733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.946773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.946982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.947134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.947177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.947396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.947624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.947649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.947822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.948024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.948052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 
00:29:54.881 [2024-07-14 03:17:49.948225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.948414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.948441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.948662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.948833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.948863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.949084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.949239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.949264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.949436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.949611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.949636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 
00:29:54.881 [2024-07-14 03:17:49.949843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.950061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.950089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.950285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.950485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.950510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.950742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.950958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.950987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 00:29:54.881 [2024-07-14 03:17:49.951168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.951348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.881 [2024-07-14 03:17:49.951373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.881 qpair failed and we were unable to recover it. 
00:29:54.884 [... the same "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." cycle for tqpair=0x161a350 (addr=10.0.0.2, port=4420) repeats verbatim through 03:17:49.987 ...]
00:29:54.884 [2024-07-14 03:17:49.987296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.884 [2024-07-14 03:17:49.987499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.884 [2024-07-14 03:17:49.987526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.884 qpair failed and we were unable to recover it. 00:29:54.884 [2024-07-14 03:17:49.987720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.884 [2024-07-14 03:17:49.987956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.884 [2024-07-14 03:17:49.987984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.884 qpair failed and we were unable to recover it. 00:29:54.884 [2024-07-14 03:17:49.988180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.884 [2024-07-14 03:17:49.988366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.884 [2024-07-14 03:17:49.988393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.884 qpair failed and we were unable to recover it. 00:29:54.884 [2024-07-14 03:17:49.988565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.884 [2024-07-14 03:17:49.988744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.884 [2024-07-14 03:17:49.988779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.884 qpair failed and we were unable to recover it. 
00:29:54.884 [2024-07-14 03:17:49.988960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.884 [2024-07-14 03:17:49.989154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.884 [2024-07-14 03:17:49.989182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.884 qpair failed and we were unable to recover it. 00:29:54.884 [2024-07-14 03:17:49.989404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.884 [2024-07-14 03:17:49.989575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.884 [2024-07-14 03:17:49.989603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.884 qpair failed and we were unable to recover it. 00:29:54.884 [2024-07-14 03:17:49.989831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.884 [2024-07-14 03:17:49.990043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.884 [2024-07-14 03:17:49.990071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.884 qpair failed and we were unable to recover it. 00:29:54.884 [2024-07-14 03:17:49.990258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.884 [2024-07-14 03:17:49.990459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.884 [2024-07-14 03:17:49.990484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.884 qpair failed and we were unable to recover it. 
00:29:54.885 [2024-07-14 03:17:49.990652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.990820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.990848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:49.991051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.991250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.991278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:49.991471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.991636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.991663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:49.991845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.992003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.992029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 
00:29:54.885 [2024-07-14 03:17:49.992223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.992417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.992444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:49.992665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.992834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.992862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:49.993076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.993273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.993301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:49.993452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.993647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.993672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 
00:29:54.885 [2024-07-14 03:17:49.993851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.994076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.994101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:49.994323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.994551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.994578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:49.994775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.994966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.994994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:49.995166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.995366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.995393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 
00:29:54.885 [2024-07-14 03:17:49.995586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.995764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.995808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:49.996020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.996235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.996262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:49.996434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.996634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.996658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:49.996812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.997023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.997052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 
00:29:54.885 [2024-07-14 03:17:49.997292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.997448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.997473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:49.997674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.997900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.997928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:49.998129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.998346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.998373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:49.998571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.998768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.998793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 
00:29:54.885 [2024-07-14 03:17:49.998984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.999180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.999208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:49.999398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.999628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:49.999655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:49.999855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:50.000072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:50.000097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:50.000929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:50.001134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:50.001163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 
00:29:54.885 [2024-07-14 03:17:50.001395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:50.001574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:50.001599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:50.001798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:50.001995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:50.002022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:50.002202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:50.002367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:50.002401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:50.002573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:50.002795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:50.002823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 
00:29:54.885 [2024-07-14 03:17:50.003037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:50.003191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:50.003217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:50.003396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:50.003632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:50.003659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.885 [2024-07-14 03:17:50.003858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:50.004036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.885 [2024-07-14 03:17:50.004061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.885 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.004212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.004361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.004386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 
00:29:54.886 [2024-07-14 03:17:50.004540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.004708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.004733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.004917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.005069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.005094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.005256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.005444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.005471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.005638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.005848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.005879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 
00:29:54.886 [2024-07-14 03:17:50.006027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.006177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.006207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.006391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.006556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.006580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.006787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.006969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.006995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.007172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.007339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.007364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 
00:29:54.886 [2024-07-14 03:17:50.007592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.007792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.007817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.008008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.008176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.008208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.008416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.008590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.008621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.008815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.009006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.009048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 
00:29:54.886 [2024-07-14 03:17:50.009241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.009425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.009465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.009652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.009845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.009880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.010042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.010197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.010224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.010409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.010549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.010574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 
00:29:54.886 [2024-07-14 03:17:50.010735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.010925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.010951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.011105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.011286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.011311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.011506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.011714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.011740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.011942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.012080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.012106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 
00:29:54.886 [2024-07-14 03:17:50.012255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.012412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.012439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.012623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.012799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.012825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.013001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.013147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.013172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.013354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.013631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.013656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 
00:29:54.886 [2024-07-14 03:17:50.013838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.013994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.014020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.014197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.014380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.014406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.014557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.014694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.014718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.014918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.015075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.015100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 
00:29:54.886 [2024-07-14 03:17:50.015320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.015499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.015525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.886 [2024-07-14 03:17:50.015713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.015903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.886 [2024-07-14 03:17:50.015934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.886 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.016086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.016243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.016268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.016410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.016584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.016609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 
00:29:54.887 [2024-07-14 03:17:50.016784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.016941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.016967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.017113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.017264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.017289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.017440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.017624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.017649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.017822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.018061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.018088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 
00:29:54.887 [2024-07-14 03:17:50.018284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.018439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.018464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.018615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.018768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.018793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.018954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.019135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.019161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.019306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.019459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.019484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 
00:29:54.887 [2024-07-14 03:17:50.019666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.019834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.019859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.020029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.020176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.020201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.020341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.020521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.020546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.020721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.020862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.020892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 
00:29:54.887 [2024-07-14 03:17:50.021057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.021211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.021237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.021392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.021554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.021581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.021731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.021874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.021900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.022067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.022220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.022245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 
00:29:54.887 [2024-07-14 03:17:50.022457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.022638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.022663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.022822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.022977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.023003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.023152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.023332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.023357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.023540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.023743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.023768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 
00:29:54.887 [2024-07-14 03:17:50.023929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.024082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.024107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.024256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.024437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.024462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.024622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.024763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.024788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.024990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.025137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.025166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 
00:29:54.887 [2024-07-14 03:17:50.025345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.025519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.025544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.025726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.025876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.025901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.026081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.026292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.026317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.887 [2024-07-14 03:17:50.026524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.026698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.026723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 
00:29:54.887 [2024-07-14 03:17:50.026953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.027138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.887 [2024-07-14 03:17:50.027164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.887 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.027342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.027524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.027549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.027722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.027901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.027927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.028069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.028255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.028280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 
00:29:54.888 [2024-07-14 03:17:50.028431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.028606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.028630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.028776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.028952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.028977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.029161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.029314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.029339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.029514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.029662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.029687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 
00:29:54.888 [2024-07-14 03:17:50.029872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.030060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.030086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.030269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.030418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.030443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.030617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.030790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.030815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.030973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.031127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.031152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 
00:29:54.888 [2024-07-14 03:17:50.031326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.031483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.031508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.031652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.031796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.031821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.032009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.032188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.032213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.032422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.032572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.032596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 
00:29:54.888 [2024-07-14 03:17:50.032806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.032997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.033023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.033222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.033370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.033395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.033552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.033709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.033734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.033896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.034073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.034098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 
00:29:54.888 [2024-07-14 03:17:50.034271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.034448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.034473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.034674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.034853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.034883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.035032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.035178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.035203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.035380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.035525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.035549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 
00:29:54.888 [2024-07-14 03:17:50.035730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.035912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.035938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.036081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.036241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.036266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.036470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.036664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.036693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.888 qpair failed and we were unable to recover it. 00:29:54.888 [2024-07-14 03:17:50.036878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.888 [2024-07-14 03:17:50.037084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.037110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 
00:29:54.889 [2024-07-14 03:17:50.037293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.037497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.037523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.037729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.037892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.037927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.038081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.038271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.038297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.038482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.038658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.038683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 
00:29:54.889 [2024-07-14 03:17:50.038833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.038998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.039024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.039205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.039365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.039390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.039552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.039704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.039728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.039953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.040144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.040169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 
00:29:54.889 [2024-07-14 03:17:50.040353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.040540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.040566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.040722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.040895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.040923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.041087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.041271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.041297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.041503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.041680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.041705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 
00:29:54.889 [2024-07-14 03:17:50.041875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.042069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.042094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.042275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.042444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.042471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.042657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.042862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.042892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.043079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.043251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.043277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 
00:29:54.889 [2024-07-14 03:17:50.043485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.043637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.043663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.043812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.043996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.044022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.044236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.044458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.044483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.044660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.044824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.044859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 
00:29:54.889 [2024-07-14 03:17:50.045063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.045247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.045274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.045426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.045573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.045598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.045744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.045954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.045979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.046157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.046376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.046401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 
00:29:54.889 [2024-07-14 03:17:50.046578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.046756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.046783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.046966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.047119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.047161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.047338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.047496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.047521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.047746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.047930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.047955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 
00:29:54.889 [2024-07-14 03:17:50.048132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.048318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.048343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.889 qpair failed and we were unable to recover it. 00:29:54.889 [2024-07-14 03:17:50.048497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.889 [2024-07-14 03:17:50.048647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.048674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.048903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.049060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.049084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.049244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.049396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.049421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 
00:29:54.890 [2024-07-14 03:17:50.049604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.049753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.049778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.049957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.050147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.050172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.050328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.050525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.050550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.050730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.050911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.050938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 
00:29:54.890 [2024-07-14 03:17:50.051115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.051381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.051406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.051566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.051714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.051741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.051950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.052144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.052169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.052346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.052491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.052517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 
00:29:54.890 [2024-07-14 03:17:50.052721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.052900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.052926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.053113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.053267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.053292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.053496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.053682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.053707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.053858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.054034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.054059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 
00:29:54.890 [2024-07-14 03:17:50.054237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.054416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.054441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.054590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.054738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.054764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.054943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.055111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.055138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.055318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.055465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.055491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 
00:29:54.890 [2024-07-14 03:17:50.055641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.055850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.055885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.056079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.056229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.056256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.056457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.056665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.056690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.056876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.057022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.057047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 
00:29:54.890 [2024-07-14 03:17:50.057199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.057378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.057403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.057578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.057823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.057848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.058065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.058249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.058275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.058453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.058631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.058670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 
00:29:54.890 [2024-07-14 03:17:50.058823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.059053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.059078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.059261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.059427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.059452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.059642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.059795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.059824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 00:29:54.890 [2024-07-14 03:17:50.059994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.060191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.890 [2024-07-14 03:17:50.060218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.890 qpair failed and we were unable to recover it. 
00:29:54.891 [2024-07-14 03:17:50.060415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.060610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.060635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.891 qpair failed and we were unable to recover it. 00:29:54.891 [2024-07-14 03:17:50.060816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.060973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.060999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.891 qpair failed and we were unable to recover it. 00:29:54.891 [2024-07-14 03:17:50.061175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.061359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.061384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.891 qpair failed and we were unable to recover it. 00:29:54.891 [2024-07-14 03:17:50.061573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.061746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.061771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.891 qpair failed and we were unable to recover it. 
00:29:54.891 [2024-07-14 03:17:50.061945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.062108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.062133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.891 qpair failed and we were unable to recover it. 00:29:54.891 [2024-07-14 03:17:50.062323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.062478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.062503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.891 qpair failed and we were unable to recover it. 00:29:54.891 [2024-07-14 03:17:50.062659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.062838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.062863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.891 qpair failed and we were unable to recover it. 00:29:54.891 [2024-07-14 03:17:50.063082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.063257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.063282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.891 qpair failed and we were unable to recover it. 
00:29:54.891 [2024-07-14 03:17:50.063458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.063611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.063640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.891 qpair failed and we were unable to recover it. 00:29:54.891 [2024-07-14 03:17:50.063819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.064002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.064028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.891 qpair failed and we were unable to recover it. 00:29:54.891 [2024-07-14 03:17:50.064187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.064373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.064398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.891 qpair failed and we were unable to recover it. 00:29:54.891 [2024-07-14 03:17:50.064548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.064763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.891 [2024-07-14 03:17:50.064788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.891 qpair failed and we were unable to recover it. 
00:29:54.891 [2024-07-14 03:17:50.064961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.065119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.065145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.891 qpair failed and we were unable to recover it.
00:29:54.891 [2024-07-14 03:17:50.065298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.065488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.065513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.891 qpair failed and we were unable to recover it.
00:29:54.891 [2024-07-14 03:17:50.065670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.065844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.065874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.891 qpair failed and we were unable to recover it.
00:29:54.891 [2024-07-14 03:17:50.066054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.066211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.066236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.891 qpair failed and we were unable to recover it.
00:29:54.891 [2024-07-14 03:17:50.066464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.066637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.066662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.891 qpair failed and we were unable to recover it.
00:29:54.891 [2024-07-14 03:17:50.066837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.067018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.067044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.891 qpair failed and we were unable to recover it.
00:29:54.891 [2024-07-14 03:17:50.067201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.067379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.067409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.891 qpair failed and we were unable to recover it.
00:29:54.891 [2024-07-14 03:17:50.067589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.067736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.067761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.891 qpair failed and we were unable to recover it.
00:29:54.891 [2024-07-14 03:17:50.067909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.068064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.068089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.891 qpair failed and we were unable to recover it.
00:29:54.891 [2024-07-14 03:17:50.068269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.068416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.068443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.891 qpair failed and we were unable to recover it.
00:29:54.891 [2024-07-14 03:17:50.068626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.068774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.068799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.891 qpair failed and we were unable to recover it.
00:29:54.891 [2024-07-14 03:17:50.068985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.069165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.069190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.891 qpair failed and we were unable to recover it.
00:29:54.891 [2024-07-14 03:17:50.069344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.069545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.069571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.891 qpair failed and we were unable to recover it.
00:29:54.891 [2024-07-14 03:17:50.069725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.069919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.069946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.891 qpair failed and we were unable to recover it.
00:29:54.891 [2024-07-14 03:17:50.070102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.070279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.070306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.891 qpair failed and we were unable to recover it.
00:29:54.891 [2024-07-14 03:17:50.070478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.070632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.070657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.891 qpair failed and we were unable to recover it.
00:29:54.891 [2024-07-14 03:17:50.070882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.071059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.071085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.891 qpair failed and we were unable to recover it.
00:29:54.891 [2024-07-14 03:17:50.071274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.071430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.071457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.891 qpair failed and we were unable to recover it.
00:29:54.891 [2024-07-14 03:17:50.071641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.891 [2024-07-14 03:17:50.071795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.071820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.071999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.072174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.072200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.072408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.072612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.072637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.072789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.072965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.072990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.073148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.073295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.073321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.073502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.073682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.073708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.073879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.074053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.074078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.074260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.074411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.074437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.074613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.074759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.074784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.074968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.075122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.075147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.075319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.075497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.075522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.075704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.075856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.075887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.076064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.076240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.076266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.076409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.076578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.076603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.076761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.076941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.076966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.077109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.077294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.077319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.077499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.077707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.077733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.077944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.078095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.078121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.078297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.078499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.078524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.078735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.078898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.078924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.079105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.079282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.079307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.079516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.079692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.079717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.079897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.080048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.080073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.080217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.080368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.080394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.892 qpair failed and we were unable to recover it.
00:29:54.892 [2024-07-14 03:17:50.080602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.892 [2024-07-14 03:17:50.080782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.080808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.080991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.081172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.081197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.081377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.081554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.081579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.081783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.081955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.081981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.082179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.082355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.082381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.082566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.082717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.082742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.082922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.083082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.083107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.083286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.083465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.083490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.083642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.083846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.083877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.084048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.084228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.084253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.084430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.084599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.084624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.084778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.084923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.084949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.085098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.085299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.085324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.085503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.085681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.085707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.085915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.086067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.086092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.086239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.086415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.086440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.086620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.086801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.086826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.086983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.087128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.087155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.087362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.087566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.087591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.087795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.087965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.087990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.088170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.088377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.088402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.088552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.088755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.088780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.088937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.089126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.089152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.089340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.089515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.089540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.089715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.089889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.089914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.090068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.090266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.090291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.090474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.090648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.090673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.090853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.091042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.091068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.091246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.091390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.091415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.091575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.091753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.091780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.091984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.092133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.893 [2024-07-14 03:17:50.092159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.893 qpair failed and we were unable to recover it.
00:29:54.893 [2024-07-14 03:17:50.092340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.092520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.092545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.894 qpair failed and we were unable to recover it.
00:29:54.894 [2024-07-14 03:17:50.092700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.092851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.092881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.894 qpair failed and we were unable to recover it.
00:29:54.894 [2024-07-14 03:17:50.093093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.093247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.093272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.894 qpair failed and we were unable to recover it.
00:29:54.894 [2024-07-14 03:17:50.093445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.093614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.093639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.894 qpair failed and we were unable to recover it.
00:29:54.894 [2024-07-14 03:17:50.093797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.093952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.093978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.894 qpair failed and we were unable to recover it.
00:29:54.894 [2024-07-14 03:17:50.094132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.094280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.094305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.894 qpair failed and we were unable to recover it.
00:29:54.894 [2024-07-14 03:17:50.094481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.094662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.094687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.894 qpair failed and we were unable to recover it.
00:29:54.894 [2024-07-14 03:17:50.094872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.095031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.095056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.894 qpair failed and we were unable to recover it.
00:29:54.894 [2024-07-14 03:17:50.095259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.095410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.095437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.894 qpair failed and we were unable to recover it.
00:29:54.894 [2024-07-14 03:17:50.095612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.095763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.095790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.894 qpair failed and we were unable to recover it.
00:29:54.894 [2024-07-14 03:17:50.095966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.096127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.096152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.894 qpair failed and we were unable to recover it.
00:29:54.894 [2024-07-14 03:17:50.096327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.096495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.096520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.894 qpair failed and we were unable to recover it.
00:29:54.894 [2024-07-14 03:17:50.096678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.096823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.096848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.894 qpair failed and we were unable to recover it.
00:29:54.894 [2024-07-14 03:17:50.097041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.097219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.097244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.894 qpair failed and we were unable to recover it.
00:29:54.894 [2024-07-14 03:17:50.097394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.097573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:54.894 [2024-07-14 03:17:50.097598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:54.894 qpair failed and we were unable to recover it.
00:29:54.894 [2024-07-14 03:17:50.097780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.097990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.098016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.894 qpair failed and we were unable to recover it. 00:29:54.894 [2024-07-14 03:17:50.098194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.098399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.098424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.894 qpair failed and we were unable to recover it. 00:29:54.894 [2024-07-14 03:17:50.098605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.098785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.098810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.894 qpair failed and we were unable to recover it. 00:29:54.894 [2024-07-14 03:17:50.098958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.099137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.099163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.894 qpair failed and we were unable to recover it. 
00:29:54.894 [2024-07-14 03:17:50.099318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.099524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.099549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.894 qpair failed and we were unable to recover it. 00:29:54.894 [2024-07-14 03:17:50.099719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.099898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.099924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.894 qpair failed and we were unable to recover it. 00:29:54.894 [2024-07-14 03:17:50.100109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.100285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.100309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.894 qpair failed and we were unable to recover it. 00:29:54.894 [2024-07-14 03:17:50.100486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.100686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.100711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.894 qpair failed and we were unable to recover it. 
00:29:54.894 [2024-07-14 03:17:50.100868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.101045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.101070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.894 qpair failed and we were unable to recover it. 00:29:54.894 [2024-07-14 03:17:50.101223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.101429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.101454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.894 qpair failed and we were unable to recover it. 00:29:54.894 [2024-07-14 03:17:50.101609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.101784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.101809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.894 qpair failed and we were unable to recover it. 00:29:54.894 [2024-07-14 03:17:50.101987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.102167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.102193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.894 qpair failed and we were unable to recover it. 
00:29:54.894 [2024-07-14 03:17:50.102345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.102527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.102552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.894 qpair failed and we were unable to recover it. 00:29:54.894 [2024-07-14 03:17:50.102732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.102919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.102946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.894 qpair failed and we were unable to recover it. 00:29:54.894 [2024-07-14 03:17:50.103091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.103299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.103334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.894 qpair failed and we were unable to recover it. 00:29:54.894 [2024-07-14 03:17:50.103566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.894 [2024-07-14 03:17:50.103749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.103775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.895 qpair failed and we were unable to recover it. 
00:29:54.895 [2024-07-14 03:17:50.103924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.104075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.104100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.895 qpair failed and we were unable to recover it. 00:29:54.895 [2024-07-14 03:17:50.104287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.104451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.104479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.895 qpair failed and we were unable to recover it. 00:29:54.895 [2024-07-14 03:17:50.104705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.104872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.104900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.895 qpair failed and we were unable to recover it. 00:29:54.895 [2024-07-14 03:17:50.105067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.105262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.105292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.895 qpair failed and we were unable to recover it. 
00:29:54.895 [2024-07-14 03:17:50.105469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.105667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.105695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.895 qpair failed and we were unable to recover it. 00:29:54.895 [2024-07-14 03:17:50.105851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.106022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.106051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.895 qpair failed and we were unable to recover it. 00:29:54.895 [2024-07-14 03:17:50.106256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.106457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.106485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.895 qpair failed and we were unable to recover it. 00:29:54.895 [2024-07-14 03:17:50.106716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.106897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.106923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.895 qpair failed and we were unable to recover it. 
00:29:54.895 [2024-07-14 03:17:50.107081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.107288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:54.895 [2024-07-14 03:17:50.107324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:54.895 qpair failed and we were unable to recover it. 00:29:54.895 [2024-07-14 03:17:50.107527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.166 [2024-07-14 03:17:50.107716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.166 [2024-07-14 03:17:50.107742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.166 qpair failed and we were unable to recover it. 00:29:55.166 [2024-07-14 03:17:50.107916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.166 [2024-07-14 03:17:50.108113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.166 [2024-07-14 03:17:50.108139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.166 qpair failed and we were unable to recover it. 00:29:55.166 [2024-07-14 03:17:50.108304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.166 [2024-07-14 03:17:50.108476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.166 [2024-07-14 03:17:50.108501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.166 qpair failed and we were unable to recover it. 
00:29:55.166 [2024-07-14 03:17:50.108684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.108862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.108895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.109057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.109236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.109261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.109443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.109594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.109621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.109797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.110024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.110050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 
00:29:55.167 [2024-07-14 03:17:50.110227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.110422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.110476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.110703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.110883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.110909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.111209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.111475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.111505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.111708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.111939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.111965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 
00:29:55.167 [2024-07-14 03:17:50.112111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.112294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.112319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.112513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.112794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.112845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.113055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.113292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.113318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.113492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.113677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.113703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 
00:29:55.167 [2024-07-14 03:17:50.113921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.114099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.114124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.114358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.114584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.114608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.114773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.114939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.114968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.115205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.115407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.115433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 
00:29:55.167 [2024-07-14 03:17:50.115624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.115790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.115818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.116030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.116201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.116227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.116399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.116627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.116653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.116847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.117046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.117073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 
00:29:55.167 [2024-07-14 03:17:50.117268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.117464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.117490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.117654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.117856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.117894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.118100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.118347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.118373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.118581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.118800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.118828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 
00:29:55.167 [2024-07-14 03:17:50.119020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.119286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.119337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.119535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.119761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.119790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.119997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.120157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.120183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.120410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.120724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.120775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 
00:29:55.167 [2024-07-14 03:17:50.120997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.121146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.121189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.121361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.121561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.121589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.167 qpair failed and we were unable to recover it. 00:29:55.167 [2024-07-14 03:17:50.121811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.122041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.167 [2024-07-14 03:17:50.122070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.168 qpair failed and we were unable to recover it. 00:29:55.168 [2024-07-14 03:17:50.122268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.168 [2024-07-14 03:17:50.122469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.168 [2024-07-14 03:17:50.122500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.168 qpair failed and we were unable to recover it. 
00:29:55.170 [2024-07-14 03:17:50.160150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.170 [2024-07-14 03:17:50.160352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.170 [2024-07-14 03:17:50.160376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.170 qpair failed and we were unable to recover it. 00:29:55.170 [2024-07-14 03:17:50.160555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.170 [2024-07-14 03:17:50.160755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.170 [2024-07-14 03:17:50.160780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.170 qpair failed and we were unable to recover it. 00:29:55.170 [2024-07-14 03:17:50.160971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.170 [2024-07-14 03:17:50.161128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.170 [2024-07-14 03:17:50.161156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.170 qpair failed and we were unable to recover it. 00:29:55.170 [2024-07-14 03:17:50.161356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.170 [2024-07-14 03:17:50.161671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.170 [2024-07-14 03:17:50.161724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.170 qpair failed and we were unable to recover it. 
00:29:55.170 [2024-07-14 03:17:50.161944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.170 [2024-07-14 03:17:50.162098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.170 [2024-07-14 03:17:50.162125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.170 qpair failed and we were unable to recover it. 00:29:55.170 [2024-07-14 03:17:50.162360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.170 [2024-07-14 03:17:50.162589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.170 [2024-07-14 03:17:50.162638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.170 qpair failed and we were unable to recover it. 00:29:55.170 [2024-07-14 03:17:50.162849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.170 [2024-07-14 03:17:50.163049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.163077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 00:29:55.171 [2024-07-14 03:17:50.163243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.163414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.163442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 
00:29:55.171 [2024-07-14 03:17:50.163637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.163842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.163874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 00:29:55.171 [2024-07-14 03:17:50.164035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.164213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.164238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 00:29:55.171 [2024-07-14 03:17:50.164414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.164586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.164611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 00:29:55.171 [2024-07-14 03:17:50.164776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.164974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.165002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 
00:29:55.171 [2024-07-14 03:17:50.165202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.165379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.165404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 00:29:55.171 [2024-07-14 03:17:50.165606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.165784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.165808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 00:29:55.171 [2024-07-14 03:17:50.165998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.166201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.166227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 00:29:55.171 [2024-07-14 03:17:50.166404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.166550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.166576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 
00:29:55.171 [2024-07-14 03:17:50.166750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.166909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.166936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 00:29:55.171 [2024-07-14 03:17:50.167107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.167250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.167275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 00:29:55.171 [2024-07-14 03:17:50.167460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.167694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.167721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 00:29:55.171 [2024-07-14 03:17:50.167932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.168086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.168112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 
00:29:55.171 [2024-07-14 03:17:50.168290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.168494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.168519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 00:29:55.171 [2024-07-14 03:17:50.168698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.168846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.168878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 00:29:55.171 [2024-07-14 03:17:50.169031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.169211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.169237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 00:29:55.171 [2024-07-14 03:17:50.169415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.169563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.169588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 
00:29:55.171 [2024-07-14 03:17:50.169764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.169918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.169944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 00:29:55.171 [2024-07-14 03:17:50.170124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.170271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.170295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 00:29:55.171 [2024-07-14 03:17:50.170473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.170658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.170683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 00:29:55.171 [2024-07-14 03:17:50.170885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.171069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.171095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 
00:29:55.171 [2024-07-14 03:17:50.171300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.171478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.171504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 00:29:55.171 [2024-07-14 03:17:50.171654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.171858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.171890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.171 qpair failed and we were unable to recover it. 00:29:55.171 [2024-07-14 03:17:50.172072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.171 [2024-07-14 03:17:50.172244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.172269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.172415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.172568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.172593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 
00:29:55.172 [2024-07-14 03:17:50.172821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.173034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.173060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.173235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.173417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.173442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.173601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.173745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.173770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.173973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.174213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.174267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 
00:29:55.172 [2024-07-14 03:17:50.174465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.174645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.174671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.174825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.174972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.174998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.175181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.175406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.175475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.175665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.175854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.175891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 
00:29:55.172 [2024-07-14 03:17:50.176110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.176289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.176315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.176495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.176669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.176695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.176877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.177033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.177059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.177236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.177409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.177435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 
00:29:55.172 [2024-07-14 03:17:50.177592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.177764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.177790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.177950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.178108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.178134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.178316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.178497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.178522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.178672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.178850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.178885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 
00:29:55.172 [2024-07-14 03:17:50.179064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.179239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.179264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.179419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.179590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.179632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.179837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.180036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.180065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.180239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.180418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.180443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 
00:29:55.172 [2024-07-14 03:17:50.180619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.180774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.180798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.180979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.181160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.181185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.181362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.181561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.181586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.181785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.182012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.182038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 
00:29:55.172 [2024-07-14 03:17:50.182244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.182446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.182475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.182653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.182828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.182853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.183033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.183210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.183235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 00:29:55.172 [2024-07-14 03:17:50.183434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.183612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.172 [2024-07-14 03:17:50.183637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.172 qpair failed and we were unable to recover it. 
00:29:55.175 [2024-07-14 03:17:50.217455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.175 [2024-07-14 03:17:50.217638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.175 [2024-07-14 03:17:50.217663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.175 qpair failed and we were unable to recover it. 00:29:55.175 [2024-07-14 03:17:50.217851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.175 [2024-07-14 03:17:50.218033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.175 [2024-07-14 03:17:50.218059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.175 qpair failed and we were unable to recover it. 00:29:55.175 [2024-07-14 03:17:50.218241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.175 [2024-07-14 03:17:50.218424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.175 [2024-07-14 03:17:50.218449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.175 qpair failed and we were unable to recover it. 00:29:55.175 [2024-07-14 03:17:50.218606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.175 [2024-07-14 03:17:50.218783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.175 [2024-07-14 03:17:50.218809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.175 qpair failed and we were unable to recover it. 
00:29:55.175 [2024-07-14 03:17:50.218985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.175 [2024-07-14 03:17:50.219170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.175 [2024-07-14 03:17:50.219195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.175 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.219369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.219543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.219568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.219796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.220000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.220026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.220203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.220450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.220502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 
00:29:55.176 [2024-07-14 03:17:50.220679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.220881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.220924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.221077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.221279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.221307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.221630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.221892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.221921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.222122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.222289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.222317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 
00:29:55.176 [2024-07-14 03:17:50.222520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.222719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.222744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.222919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.223139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.223163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.223396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.223720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.223774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.224013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.224184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.224209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 
00:29:55.176 [2024-07-14 03:17:50.224387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.224564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.224588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.224763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.224936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.224964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.225220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.225519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.225570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.225792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.226009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.226035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 
00:29:55.176 [2024-07-14 03:17:50.226217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.226420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.226445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.226628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.226822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.226849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.227046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.227249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.227298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.227533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.227725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.227753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 
00:29:55.176 [2024-07-14 03:17:50.227926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.228106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.228131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.228309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.228454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.228479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.228696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.228889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.228917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.229114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.229317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.229343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 
00:29:55.176 [2024-07-14 03:17:50.229498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.229699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.229727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.229941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.230163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.230192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.230455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.230723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.230773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.230973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.231270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.231329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 
00:29:55.176 [2024-07-14 03:17:50.231502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.231709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.231759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.231961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.232163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.232189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.232410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.232634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.232659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.176 qpair failed and we were unable to recover it. 00:29:55.176 [2024-07-14 03:17:50.232839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.233049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.176 [2024-07-14 03:17:50.233078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 
00:29:55.177 [2024-07-14 03:17:50.233309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.233600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.233654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 00:29:55.177 [2024-07-14 03:17:50.233888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.234086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.234114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 00:29:55.177 [2024-07-14 03:17:50.234320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.234501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.234526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 00:29:55.177 [2024-07-14 03:17:50.234732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.234913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.234939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 
00:29:55.177 [2024-07-14 03:17:50.235116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.235306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.235392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 00:29:55.177 [2024-07-14 03:17:50.235622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.235850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.235886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 00:29:55.177 [2024-07-14 03:17:50.236080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.236366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.236424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 00:29:55.177 [2024-07-14 03:17:50.236617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.236815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.236843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 
00:29:55.177 [2024-07-14 03:17:50.237042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.237275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.237325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 00:29:55.177 [2024-07-14 03:17:50.237516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.237684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.237713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 00:29:55.177 [2024-07-14 03:17:50.237921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.238116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.238143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 00:29:55.177 [2024-07-14 03:17:50.238347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.238545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.238603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 
00:29:55.177 [2024-07-14 03:17:50.238803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.239039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.239067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 00:29:55.177 [2024-07-14 03:17:50.239266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.239453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.239481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 00:29:55.177 [2024-07-14 03:17:50.239641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.239845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.239875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 00:29:55.177 [2024-07-14 03:17:50.240039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.240253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.240305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 
00:29:55.177 [2024-07-14 03:17:50.240538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.240693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.240717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 00:29:55.177 [2024-07-14 03:17:50.240905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.241099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.241127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 00:29:55.177 [2024-07-14 03:17:50.241488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.241713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.241741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 00:29:55.177 [2024-07-14 03:17:50.241969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.242162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.242223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 
00:29:55.177 [2024-07-14 03:17:50.242445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.242702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.242756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 00:29:55.177 [2024-07-14 03:17:50.242962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.243165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.243192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 00:29:55.177 [2024-07-14 03:17:50.243393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.243614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.243642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 00:29:55.177 [2024-07-14 03:17:50.243812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.244024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.177 [2024-07-14 03:17:50.244050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.177 qpair failed and we were unable to recover it. 
00:29:55.180 [2024-07-14 03:17:50.285120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.285296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.285322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.180 qpair failed and we were unable to recover it. 00:29:55.180 [2024-07-14 03:17:50.285500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.285644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.285684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.180 qpair failed and we were unable to recover it. 00:29:55.180 [2024-07-14 03:17:50.285906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.286137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.286165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.180 qpair failed and we were unable to recover it. 00:29:55.180 [2024-07-14 03:17:50.286399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.286592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.286621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.180 qpair failed and we were unable to recover it. 
00:29:55.180 [2024-07-14 03:17:50.286849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.287062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.287090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.180 qpair failed and we were unable to recover it. 00:29:55.180 [2024-07-14 03:17:50.287274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.287478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.287505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.180 qpair failed and we were unable to recover it. 00:29:55.180 [2024-07-14 03:17:50.287773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.288003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.288033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.180 qpair failed and we were unable to recover it. 00:29:55.180 [2024-07-14 03:17:50.288225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.288455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.288507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.180 qpair failed and we were unable to recover it. 
00:29:55.180 [2024-07-14 03:17:50.288679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.288877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.288906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.180 qpair failed and we were unable to recover it. 00:29:55.180 [2024-07-14 03:17:50.289106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.289303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.180 [2024-07-14 03:17:50.289331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.180 qpair failed and we were unable to recover it. 00:29:55.180 [2024-07-14 03:17:50.289592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.289806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.289836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.290045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.290278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.290339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 
00:29:55.181 [2024-07-14 03:17:50.290569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.290752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.290777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.290974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.291171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.291199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.291528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.291783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.291808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.291980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.292179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.292209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 
00:29:55.181 [2024-07-14 03:17:50.292412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.292673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.292727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.292923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.293188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.293240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.293448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.293650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.293678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.293880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.294080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.294106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 
00:29:55.181 [2024-07-14 03:17:50.294286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.294463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.294488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.294654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.294880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.294909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.295078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.295300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.295325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.295516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.295849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.295931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 
00:29:55.181 [2024-07-14 03:17:50.296156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.296374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.296427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.296647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.296859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.296891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.297088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.297333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.297383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.297602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.297795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.297823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 
00:29:55.181 [2024-07-14 03:17:50.298058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.298216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.298241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.298422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.298707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.298767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.298980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.299174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.299204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.299435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.299611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.299637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 
00:29:55.181 [2024-07-14 03:17:50.299840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.300045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.300074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.300300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.300502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.300528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.300683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.300884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.300912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.301135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.301344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.301370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 
00:29:55.181 [2024-07-14 03:17:50.301524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.301742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.301770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.301990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.302283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.302343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.302703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.302956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.302984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.181 [2024-07-14 03:17:50.303177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.303364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.303389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 
00:29:55.181 [2024-07-14 03:17:50.303598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.303807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.181 [2024-07-14 03:17:50.303835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.181 qpair failed and we were unable to recover it. 00:29:55.182 [2024-07-14 03:17:50.304047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.304244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.304272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.182 qpair failed and we were unable to recover it. 00:29:55.182 [2024-07-14 03:17:50.304436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.304761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.304811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.182 qpair failed and we were unable to recover it. 00:29:55.182 [2024-07-14 03:17:50.305041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.305393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.305442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.182 qpair failed and we were unable to recover it. 
00:29:55.182 [2024-07-14 03:17:50.305663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.305838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.305874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.182 qpair failed and we were unable to recover it. 00:29:55.182 [2024-07-14 03:17:50.306051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.306280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.306305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.182 qpair failed and we were unable to recover it. 00:29:55.182 [2024-07-14 03:17:50.306488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.306665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.306690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.182 qpair failed and we were unable to recover it. 00:29:55.182 [2024-07-14 03:17:50.306873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.307082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.307111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.182 qpair failed and we were unable to recover it. 
00:29:55.182 [2024-07-14 03:17:50.307307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.307551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.307601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.182 qpair failed and we were unable to recover it. 00:29:55.182 [2024-07-14 03:17:50.307832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.308026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.308055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.182 qpair failed and we were unable to recover it. 00:29:55.182 [2024-07-14 03:17:50.308274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.308595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.308650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.182 qpair failed and we were unable to recover it. 00:29:55.182 [2024-07-14 03:17:50.308877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.309102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.309127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.182 qpair failed and we were unable to recover it. 
00:29:55.182 [2024-07-14 03:17:50.309305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.309457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.309483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.182 qpair failed and we were unable to recover it. 00:29:55.182 [2024-07-14 03:17:50.309638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.309810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.309835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.182 qpair failed and we were unable to recover it. 00:29:55.182 [2024-07-14 03:17:50.310002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.310204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.310233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.182 qpair failed and we were unable to recover it. 00:29:55.182 [2024-07-14 03:17:50.310428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.310740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.310791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.182 qpair failed and we were unable to recover it. 
00:29:55.182 [2024-07-14 03:17:50.310970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.311143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.182 [2024-07-14 03:17:50.311171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.182 qpair failed and we were unable to recover it.
[the same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error sequence for tqpair=0x7f4b04000b90, addr=10.0.0.2, port=4420 repeats for every subsequent connection attempt from 03:17:50.311368 through 03:17:50.352029, each ending with "qpair failed and we were unable to recover it."]
00:29:55.185 [2024-07-14 03:17:50.352256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.352551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.352612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.185 qpair failed and we were unable to recover it. 00:29:55.185 [2024-07-14 03:17:50.352785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.352987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.353013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.185 qpair failed and we were unable to recover it. 00:29:55.185 [2024-07-14 03:17:50.353248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.353630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.353678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.185 qpair failed and we were unable to recover it. 00:29:55.185 [2024-07-14 03:17:50.353852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.354129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.354158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.185 qpair failed and we were unable to recover it. 
00:29:55.185 [2024-07-14 03:17:50.354385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.354675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.354730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.185 qpair failed and we were unable to recover it. 00:29:55.185 [2024-07-14 03:17:50.354934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.355130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.355158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.185 qpair failed and we were unable to recover it. 00:29:55.185 [2024-07-14 03:17:50.355356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.355532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.355557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.185 qpair failed and we were unable to recover it. 00:29:55.185 [2024-07-14 03:17:50.355739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.355937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.355966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.185 qpair failed and we were unable to recover it. 
00:29:55.185 [2024-07-14 03:17:50.356147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.356357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.356412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.185 qpair failed and we were unable to recover it. 00:29:55.185 [2024-07-14 03:17:50.356613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.356779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.356807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.185 qpair failed and we were unable to recover it. 00:29:55.185 [2024-07-14 03:17:50.356975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.357168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.357196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.185 qpair failed and we were unable to recover it. 00:29:55.185 [2024-07-14 03:17:50.357393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.357588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.357615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.185 qpair failed and we were unable to recover it. 
00:29:55.185 [2024-07-14 03:17:50.357794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.357997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.358023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.185 qpair failed and we were unable to recover it. 00:29:55.185 [2024-07-14 03:17:50.358206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.358516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.358573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.185 qpair failed and we were unable to recover it. 00:29:55.185 [2024-07-14 03:17:50.358771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.358961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.185 [2024-07-14 03:17:50.358989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.185 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.359211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.359523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.359590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 
00:29:55.186 [2024-07-14 03:17:50.359815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.359974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.360000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.360233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.360566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.360619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.360819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.361006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.361032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.361192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.361416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.361445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 
00:29:55.186 [2024-07-14 03:17:50.361620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.361817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.361845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.362048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.362207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.362234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.362529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.362734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.362761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.362951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.363176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.363201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 
00:29:55.186 [2024-07-14 03:17:50.363380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.363608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.363636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.363807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.364009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.364038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.364208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.364560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.364612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.364855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.365067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.365092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 
00:29:55.186 [2024-07-14 03:17:50.365270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.365444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.365468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.365677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.365920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.365949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.366178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.366493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.366518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.366698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.366912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.366938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 
00:29:55.186 [2024-07-14 03:17:50.367146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.367368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.367418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.367612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.367805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.367833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.368062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.368253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.368281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.368443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.368668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.368695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 
00:29:55.186 [2024-07-14 03:17:50.368874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.369058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.369083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.369272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.369502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.369530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.369892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.370148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.370176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.370378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.370532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.370557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 
00:29:55.186 [2024-07-14 03:17:50.370753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.370956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.370982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.371159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.371335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.371362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.371583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.371786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.371811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.372008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.372270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.372322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 
00:29:55.186 [2024-07-14 03:17:50.372526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.372725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.372755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.186 [2024-07-14 03:17:50.372961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.373163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.186 [2024-07-14 03:17:50.373191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.186 qpair failed and we were unable to recover it. 00:29:55.187 [2024-07-14 03:17:50.373534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.373724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.373753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.187 qpair failed and we were unable to recover it. 00:29:55.187 [2024-07-14 03:17:50.373981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.374251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.374299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.187 qpair failed and we were unable to recover it. 
00:29:55.187 [2024-07-14 03:17:50.374502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.374706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.374731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.187 qpair failed and we were unable to recover it. 00:29:55.187 [2024-07-14 03:17:50.374945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.375098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.375140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.187 qpair failed and we were unable to recover it. 00:29:55.187 [2024-07-14 03:17:50.375310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.375700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.375753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.187 qpair failed and we were unable to recover it. 00:29:55.187 [2024-07-14 03:17:50.375988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.376211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.376239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.187 qpair failed and we were unable to recover it. 
00:29:55.187 [2024-07-14 03:17:50.376421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.376613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.376642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.187 qpair failed and we were unable to recover it. 00:29:55.187 [2024-07-14 03:17:50.376840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.377040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.377068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.187 qpair failed and we were unable to recover it. 00:29:55.187 [2024-07-14 03:17:50.377453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.377857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.377925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.187 qpair failed and we were unable to recover it. 00:29:55.187 [2024-07-14 03:17:50.378146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.378534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.187 [2024-07-14 03:17:50.378598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.187 qpair failed and we were unable to recover it. 
00:29:55.187 [2024-07-14 03:17:50.378812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.187 [2024-07-14 03:17:50.379011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.187 [2024-07-14 03:17:50.379040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.187 qpair failed and we were unable to recover it.
[... the same four-record connect()/qpair-failure sequence repeats 87 more times, with timestamps advancing from 03:17:50.379 through 03:17:50.414 and elapsed time 00:29:55.187-00:29:55.461; every attempt targets tqpair=0x7f4b04000b90 (addr=10.0.0.2, port=4420) and fails with errno = 111, and no qpair recovers ...]
00:29:55.461 [2024-07-14 03:17:50.415062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.415243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.415267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.415419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.415649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.415674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.415853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.416036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.416061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.416208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.416357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.416384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 
00:29:55.461 [2024-07-14 03:17:50.416567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.416737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.416762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.416978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.417184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.417212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.417381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.417732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.417784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.418007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.418168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.418198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 
00:29:55.461 [2024-07-14 03:17:50.418372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.418546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.418570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.418749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.418955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.418984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.419158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.419360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.419385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.419578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.419770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.419797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 
00:29:55.461 [2024-07-14 03:17:50.419976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.420123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.420148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.420357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.420558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.420582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.420737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.420888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.420914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.421094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.421244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.421270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 
00:29:55.461 [2024-07-14 03:17:50.421430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.421600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.421625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.421799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.422014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.422040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.422200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.422383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.422408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.422589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.422762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.422785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 
00:29:55.461 [2024-07-14 03:17:50.422963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.423143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.423169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.423348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.423504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.423530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.423707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.423913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.423938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.424126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.424445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.424493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 
00:29:55.461 [2024-07-14 03:17:50.424670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.424840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.424871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.461 qpair failed and we were unable to recover it. 00:29:55.461 [2024-07-14 03:17:50.425053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.461 [2024-07-14 03:17:50.425227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.425254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.425410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.425555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.425580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.425726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.425918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.425944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 
00:29:55.462 [2024-07-14 03:17:50.426119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.426292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.426316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.426484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.426699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.426727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.426912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.427063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.427087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.427263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.427439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.427464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 
00:29:55.462 [2024-07-14 03:17:50.427668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.427820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.427844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.428027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.428206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.428231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.428499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.428737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.428765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.428988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.429161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.429185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 
00:29:55.462 [2024-07-14 03:17:50.429365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.429543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.429568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.429765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.429977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.430003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.430179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.430326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.430350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.430573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.430744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.430770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 
00:29:55.462 [2024-07-14 03:17:50.430950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.431154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.431180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.431328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.431494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.431521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.431688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.431883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.431911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.432084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.432392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.432443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 
00:29:55.462 [2024-07-14 03:17:50.432647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.432822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.432847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.433062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.433382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.433435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.433640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.433789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.433813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.433996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.434272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.434327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 
00:29:55.462 [2024-07-14 03:17:50.434522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.434701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.434726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.434909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.435090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.435115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.435402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.435719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.435769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.435991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.436246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.436303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 
00:29:55.462 [2024-07-14 03:17:50.436507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.436685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.436710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.436910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.437061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.437085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.437269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.437491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.437519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 00:29:55.462 [2024-07-14 03:17:50.437738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.437913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.437939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.462 qpair failed and we were unable to recover it. 
00:29:55.462 [2024-07-14 03:17:50.438140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.462 [2024-07-14 03:17:50.438317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.463 [2024-07-14 03:17:50.438342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.463 qpair failed and we were unable to recover it. 00:29:55.463 [2024-07-14 03:17:50.438524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.463 [2024-07-14 03:17:50.438705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.463 [2024-07-14 03:17:50.438730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.463 qpair failed and we were unable to recover it. 00:29:55.463 [2024-07-14 03:17:50.438911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.463 [2024-07-14 03:17:50.439107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.463 [2024-07-14 03:17:50.439135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.463 qpair failed and we were unable to recover it. 00:29:55.463 [2024-07-14 03:17:50.439356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.463 [2024-07-14 03:17:50.439663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.463 [2024-07-14 03:17:50.439711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.463 qpair failed and we were unable to recover it. 
00:29:55.465 [2024-07-14 03:17:50.476703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.465 [2024-07-14 03:17:50.476900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.465 [2024-07-14 03:17:50.476929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.465 qpair failed and we were unable to recover it. 00:29:55.465 [2024-07-14 03:17:50.477120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.465 [2024-07-14 03:17:50.477347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.465 [2024-07-14 03:17:50.477371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.465 qpair failed and we were unable to recover it. 00:29:55.465 [2024-07-14 03:17:50.477622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.465 [2024-07-14 03:17:50.477837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.477874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.478091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.478370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.478395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 
00:29:55.466 [2024-07-14 03:17:50.478602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.478807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.478840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.479034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.479310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.479358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.479638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.479854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.479892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.480101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.480301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.480329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 
00:29:55.466 [2024-07-14 03:17:50.480504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.480683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.480724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.480916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.481127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.481153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.481353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.481545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.481572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.481765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.481937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.481965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 
00:29:55.466 [2024-07-14 03:17:50.482161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.482310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.482336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.482519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.482712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.482739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.482940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.483082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.483106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.483308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.483482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.483507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 
00:29:55.466 [2024-07-14 03:17:50.483709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.483889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.483917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.484148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.484357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.484381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.484583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.484765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.484793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.485016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.485337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.485400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 
00:29:55.466 [2024-07-14 03:17:50.485598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.485784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.485811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.486010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.486263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.486321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.486589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.486760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.486788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.486987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.487256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.487305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 
00:29:55.466 [2024-07-14 03:17:50.487504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.487686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.487710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.487898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.488121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.488149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.488412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.488606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.488668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.488856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.489083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.489111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 
00:29:55.466 [2024-07-14 03:17:50.489336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.489613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.489663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.489893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.490063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.490088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.490303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.490620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.490671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.490893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.491117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.491145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 
00:29:55.466 [2024-07-14 03:17:50.491314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.491506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.466 [2024-07-14 03:17:50.491533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.466 qpair failed and we were unable to recover it. 00:29:55.466 [2024-07-14 03:17:50.491724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.491949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.491977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.492167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.492482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.492542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.492743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.492936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.492965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 
00:29:55.467 [2024-07-14 03:17:50.493164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.493414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.493465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.493662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.493852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.493886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.494090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.494309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.494337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.494526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.494787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.494837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 
00:29:55.467 [2024-07-14 03:17:50.495025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.495225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.495310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.495485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.495704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.495762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.495955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.496126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.496155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.496347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.496542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.496566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 
00:29:55.467 [2024-07-14 03:17:50.496768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.496931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.496960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.497165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.497426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.497480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.497823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.498012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.498039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.498282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.498430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.498471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 
00:29:55.467 [2024-07-14 03:17:50.498637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.498824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.498852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.499095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.499397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.499454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.499660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.499830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.499858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.500076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.500399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.500452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 
00:29:55.467 [2024-07-14 03:17:50.500651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.500806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.500846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.501028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.501253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.501309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.501636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.501886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.501915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.502094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.502327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.502377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 
00:29:55.467 [2024-07-14 03:17:50.502575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.502741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.502769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.502974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.503152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.503177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.503355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.503500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.503545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.503737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.503933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.503961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 
00:29:55.467 [2024-07-14 03:17:50.504155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.504301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.504342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.504549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.504749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.504773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.504964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.505145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.505187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 00:29:55.467 [2024-07-14 03:17:50.505374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.505569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.467 [2024-07-14 03:17:50.505593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.467 qpair failed and we were unable to recover it. 
00:29:55.468 [2024-07-14 03:17:50.505795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.468 [2024-07-14 03:17:50.506025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.468 [2024-07-14 03:17:50.506053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.468 qpair failed and we were unable to recover it. 00:29:55.468 [2024-07-14 03:17:50.506254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.468 [2024-07-14 03:17:50.506573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.468 [2024-07-14 03:17:50.506626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.468 qpair failed and we were unable to recover it. 00:29:55.468 [2024-07-14 03:17:50.506821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.468 [2024-07-14 03:17:50.507059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.468 [2024-07-14 03:17:50.507085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.468 qpair failed and we were unable to recover it. 00:29:55.468 [2024-07-14 03:17:50.507241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.468 [2024-07-14 03:17:50.507436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.468 [2024-07-14 03:17:50.507464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.468 qpair failed and we were unable to recover it. 
00:29:55.468 [2024-07-14 03:17:50.507627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.507805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.507847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.508028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.508198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.508225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.508601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.508816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.508844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.509053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.509334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.509384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.509585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.509772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.509799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.510007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.510181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.510209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.510482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.510737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.510765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.510970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.511194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.511221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.511393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.511699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.511757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.511962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.512131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.512159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.512486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.512686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.512714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.512911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.513116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.513140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.513340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.513642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.513696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.513925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.514123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.514151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.514389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.514584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.514611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.514811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.515001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.515029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.515197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.515416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.515443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.515669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.515827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.515855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.516063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.516369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.516422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.516653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.516846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.516880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.517106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.517329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.517379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.517603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.517796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.517823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.518002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.518240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.518292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.518490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.518703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.518752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.518977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.519136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.519163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.519365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.519648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.519697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.468 qpair failed and we were unable to recover it.
00:29:55.468 [2024-07-14 03:17:50.519892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.520095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.468 [2024-07-14 03:17:50.520122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.520350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.520675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.520724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.520917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.521115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.521142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.521313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.521470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.521496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.521700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.521899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.521927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.522147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.522468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.522515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.522743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.522910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.522938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.523127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.523465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.523523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.523752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.523948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.523977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.524144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.524364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.524389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.524545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.524721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.524747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.524950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.525149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.525179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.525372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.525610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.525669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.525891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.526198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.526249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.526477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.526677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.526706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.526906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.527053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.527079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.527324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.527574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.527599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.527801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.527967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.527996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.528195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.528442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.528492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.528688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.528864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.528900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.529100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.529317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.529342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.529522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.529703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.529730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.529902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.530087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.530115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.530285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.530537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.530588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.530788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.530967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.530993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.531145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.531315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.531339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.531514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.531684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.531709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.469 qpair failed and we were unable to recover it.
00:29:55.469 [2024-07-14 03:17:50.531880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.532056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.469 [2024-07-14 03:17:50.532080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.532320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.532630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.532682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.532886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.533080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.533109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.533308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.533509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.533537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.533710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.533907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.533941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.534114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.534433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.534486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.534679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.534888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.534914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.535091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.535266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.535291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.535470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.535797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.535850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.536078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.536236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.536261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.536460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.536656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.536680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.536918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.537091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.537116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.537289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.537462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.537487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.537759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.537927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.537956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.538175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.538497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.538555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.538775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.538978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.539006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.539198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.539426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.539451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.539628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.539830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.539859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.540059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.540244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.470 [2024-07-14 03:17:50.540271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.470 qpair failed and we were unable to recover it.
00:29:55.470 [2024-07-14 03:17:50.540471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.540649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.540673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.470 qpair failed and we were unable to recover it. 00:29:55.470 [2024-07-14 03:17:50.540852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.541085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.541114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.470 qpair failed and we were unable to recover it. 00:29:55.470 [2024-07-14 03:17:50.541412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.541740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.541766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.470 qpair failed and we were unable to recover it. 00:29:55.470 [2024-07-14 03:17:50.541949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.542155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.542183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.470 qpair failed and we were unable to recover it. 
00:29:55.470 [2024-07-14 03:17:50.542415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.542664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.542715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.470 qpair failed and we were unable to recover it. 00:29:55.470 [2024-07-14 03:17:50.542916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.543137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.543169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.470 qpair failed and we were unable to recover it. 00:29:55.470 [2024-07-14 03:17:50.543414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.543661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.543714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.470 qpair failed and we were unable to recover it. 00:29:55.470 [2024-07-14 03:17:50.543879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.544098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.544126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.470 qpair failed and we were unable to recover it. 
00:29:55.470 [2024-07-14 03:17:50.544321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.544582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.544633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.470 qpair failed and we were unable to recover it. 00:29:55.470 [2024-07-14 03:17:50.544854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.545025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.545053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.470 qpair failed and we were unable to recover it. 00:29:55.470 [2024-07-14 03:17:50.545247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.545444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.545471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.470 qpair failed and we were unable to recover it. 00:29:55.470 [2024-07-14 03:17:50.545669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.545893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.545919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.470 qpair failed and we were unable to recover it. 
00:29:55.470 [2024-07-14 03:17:50.546127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.470 [2024-07-14 03:17:50.546367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.546416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.546616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.546758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.546782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.546938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.547118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.547144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.547370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.547591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.547620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 
00:29:55.471 [2024-07-14 03:17:50.547798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.547952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.547978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.548154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.548357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.548382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.548570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.548732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.548759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.548951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.549160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.549186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 
00:29:55.471 [2024-07-14 03:17:50.549360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.549561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.549588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.549751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.549980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.550005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.550178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.550467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.550517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.550740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.550931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.550960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 
00:29:55.471 [2024-07-14 03:17:50.551187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.551464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.551516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.551719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.551941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.551970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.552173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.552329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.552356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.552551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.552730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.552772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 
00:29:55.471 [2024-07-14 03:17:50.552974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.553123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.553148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.553350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.553531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.553558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.553759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.553933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.553975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.554201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.554470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.554523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 
00:29:55.471 [2024-07-14 03:17:50.554720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.555008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.555058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.555258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.555484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.555508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.555659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.555887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.555916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.556142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.556404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.556459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 
00:29:55.471 [2024-07-14 03:17:50.556668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.556883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.556912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.557140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.557441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.557489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.557759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.557968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.557996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.558192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.558508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.558571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 
00:29:55.471 [2024-07-14 03:17:50.558773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.558957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.558982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.559186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.559516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.559576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.559803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.559954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.471 [2024-07-14 03:17:50.559979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.471 qpair failed and we were unable to recover it. 00:29:55.471 [2024-07-14 03:17:50.560174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.560392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.560419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 
00:29:55.472 [2024-07-14 03:17:50.560638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.560876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.560904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 00:29:55.472 [2024-07-14 03:17:50.561131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.561329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.561356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 00:29:55.472 [2024-07-14 03:17:50.561586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.561803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.561831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 00:29:55.472 [2024-07-14 03:17:50.562041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.562337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.562393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 
00:29:55.472 [2024-07-14 03:17:50.562570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.562722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.562749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 00:29:55.472 [2024-07-14 03:17:50.562991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.563217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.563244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 00:29:55.472 [2024-07-14 03:17:50.563464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.563771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.563832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 00:29:55.472 [2024-07-14 03:17:50.564027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.564248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.564276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 
00:29:55.472 [2024-07-14 03:17:50.564472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.564795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.564854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 00:29:55.472 [2024-07-14 03:17:50.565070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.565348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.565398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 00:29:55.472 [2024-07-14 03:17:50.565717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.565940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.565969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 00:29:55.472 [2024-07-14 03:17:50.566133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.566449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.566499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 
00:29:55.472 [2024-07-14 03:17:50.566726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.566949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.566977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 00:29:55.472 [2024-07-14 03:17:50.567178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.567407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.567459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 00:29:55.472 [2024-07-14 03:17:50.567840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.568085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.568113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 00:29:55.472 [2024-07-14 03:17:50.568310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.568645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.568705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 
00:29:55.472 [2024-07-14 03:17:50.568929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.569105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.569132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 00:29:55.472 [2024-07-14 03:17:50.569295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.569516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.569543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 00:29:55.472 [2024-07-14 03:17:50.569763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.569956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.569984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 00:29:55.472 [2024-07-14 03:17:50.570141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.570312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.472 [2024-07-14 03:17:50.570339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.472 qpair failed and we were unable to recover it. 
00:29:55.472 [2024-07-14 03:17:50.570537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.472 [2024-07-14 03:17:50.570738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.472 [2024-07-14 03:17:50.570765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.472 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." cycles repeat continuously from 03:17:50.570537 through 03:17:50.610833, all for tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 ...]
00:29:55.475 [2024-07-14 03:17:50.611045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.611272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.611328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.475 qpair failed and we were unable to recover it. 00:29:55.475 [2024-07-14 03:17:50.611520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.611737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.611762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.475 qpair failed and we were unable to recover it. 00:29:55.475 [2024-07-14 03:17:50.611942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.612129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.612153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.475 qpair failed and we were unable to recover it. 00:29:55.475 [2024-07-14 03:17:50.612362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.612527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.612555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.475 qpair failed and we were unable to recover it. 
00:29:55.475 [2024-07-14 03:17:50.612723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.612940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.612972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.475 qpair failed and we were unable to recover it. 00:29:55.475 [2024-07-14 03:17:50.613199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.613457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.613510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.475 qpair failed and we were unable to recover it. 00:29:55.475 [2024-07-14 03:17:50.613745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.613918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.613946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.475 qpair failed and we were unable to recover it. 00:29:55.475 [2024-07-14 03:17:50.614168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.614361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.614388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.475 qpair failed and we were unable to recover it. 
00:29:55.475 [2024-07-14 03:17:50.614553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.614772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.614799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.475 qpair failed and we were unable to recover it. 00:29:55.475 [2024-07-14 03:17:50.615013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.615183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.615246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.475 qpair failed and we were unable to recover it. 00:29:55.475 [2024-07-14 03:17:50.615527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.475 [2024-07-14 03:17:50.615770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.615798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.615981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.616162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.616188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 
00:29:55.476 [2024-07-14 03:17:50.616364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.616620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.616676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.616876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.617101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.617129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.617483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.617841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.617911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.618113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.618395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.618446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 
00:29:55.476 [2024-07-14 03:17:50.618682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.618908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.618937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.619138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.619336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.619361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.619562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.619738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.619764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.619961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.620190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.620215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 
00:29:55.476 [2024-07-14 03:17:50.620388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.620637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.620689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.620889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.621094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.621119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.621378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.621670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.621697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.621888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.622112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.622139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 
00:29:55.476 [2024-07-14 03:17:50.622336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.622562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.622595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.622795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.622985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.623013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.623202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.623560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.623611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.623808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.624030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.624058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 
00:29:55.476 [2024-07-14 03:17:50.624258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.624553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.624605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.624776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.624965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.624994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.625162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.625407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.625456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.625685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.625877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.625905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 
00:29:55.476 [2024-07-14 03:17:50.626082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.626236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.626261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.626452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.626781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.626831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.627055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.627282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.627307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.627541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.627800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.627828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 
00:29:55.476 [2024-07-14 03:17:50.628045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.628270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.628298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.628473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.628688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.628715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.628978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.629173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.629200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.629394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.629592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.629617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 
00:29:55.476 [2024-07-14 03:17:50.629796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.630020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.476 [2024-07-14 03:17:50.630049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.476 qpair failed and we were unable to recover it. 00:29:55.476 [2024-07-14 03:17:50.630223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.630449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.630476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 00:29:55.477 [2024-07-14 03:17:50.630682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.630889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.630918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 00:29:55.477 [2024-07-14 03:17:50.631130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.631441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.631503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 
00:29:55.477 [2024-07-14 03:17:50.631707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.631901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.631930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 00:29:55.477 [2024-07-14 03:17:50.632125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.632350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.632397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 00:29:55.477 [2024-07-14 03:17:50.632743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.632965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.632991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 00:29:55.477 [2024-07-14 03:17:50.633203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.633552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.633603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 
00:29:55.477 [2024-07-14 03:17:50.633799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.633983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.634010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 00:29:55.477 [2024-07-14 03:17:50.634207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.634569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.634626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 00:29:55.477 [2024-07-14 03:17:50.634825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.635006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.635036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 00:29:55.477 [2024-07-14 03:17:50.635210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.635411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.635438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 
00:29:55.477 [2024-07-14 03:17:50.635604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.635786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.635811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 00:29:55.477 [2024-07-14 03:17:50.635988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.636139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.636164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 00:29:55.477 [2024-07-14 03:17:50.636335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.636488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.636514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 00:29:55.477 [2024-07-14 03:17:50.636688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.636881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.636910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 
00:29:55.477 [2024-07-14 03:17:50.637082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.637256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.637280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 00:29:55.477 [2024-07-14 03:17:50.637454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.637632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.637657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 00:29:55.477 [2024-07-14 03:17:50.637842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.638025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.638051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 00:29:55.477 [2024-07-14 03:17:50.638193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.638338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.477 [2024-07-14 03:17:50.638364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.477 qpair failed and we were unable to recover it. 
00:29:55.477-00:29:55.480 [2024-07-14 03:17:50.638540 through 03:17:50.671510] the identical failure cycle repeats: posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 (twice per attempt); nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.
00:29:55.480 [2024-07-14 03:17:50.671684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.671889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.671932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.480 qpair failed and we were unable to recover it. 00:29:55.480 [2024-07-14 03:17:50.672086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.672266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.672290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.480 qpair failed and we were unable to recover it. 00:29:55.480 [2024-07-14 03:17:50.672476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.672651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.672676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.480 qpair failed and we were unable to recover it. 00:29:55.480 [2024-07-14 03:17:50.672828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.672978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.673004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.480 qpair failed and we were unable to recover it. 
00:29:55.480 [2024-07-14 03:17:50.673186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.673389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.673414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.480 qpair failed and we were unable to recover it. 00:29:55.480 [2024-07-14 03:17:50.673591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.673732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.673758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.480 qpair failed and we were unable to recover it. 00:29:55.480 [2024-07-14 03:17:50.673901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.674099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.674124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.480 qpair failed and we were unable to recover it. 00:29:55.480 [2024-07-14 03:17:50.674299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.674475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.674500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.480 qpair failed and we were unable to recover it. 
00:29:55.480 [2024-07-14 03:17:50.674680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.674828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.674852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.480 qpair failed and we were unable to recover it. 00:29:55.480 [2024-07-14 03:17:50.675037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.675212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.675238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.480 qpair failed and we were unable to recover it. 00:29:55.480 [2024-07-14 03:17:50.675420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.675566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.675591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.480 qpair failed and we were unable to recover it. 00:29:55.480 [2024-07-14 03:17:50.675776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.675950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.675980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.480 qpair failed and we were unable to recover it. 
00:29:55.480 [2024-07-14 03:17:50.676158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.676332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.676357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.480 qpair failed and we were unable to recover it. 00:29:55.480 [2024-07-14 03:17:50.676516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.676663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.676687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.480 qpair failed and we were unable to recover it. 00:29:55.480 [2024-07-14 03:17:50.676847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.677009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.677034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.480 qpair failed and we were unable to recover it. 00:29:55.480 [2024-07-14 03:17:50.677182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.677383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.677408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.480 qpair failed and we were unable to recover it. 
00:29:55.480 [2024-07-14 03:17:50.677563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.480 [2024-07-14 03:17:50.677712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.677736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.677921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.678097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.678122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.678329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.678504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.678528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.678703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.678914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.678940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 
00:29:55.481 [2024-07-14 03:17:50.679120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.679293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.679317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.679466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.679649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.679678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.679861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.680040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.680064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.680246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.680416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.680441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 
00:29:55.481 [2024-07-14 03:17:50.680616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.680788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.680812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.680997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.681170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.681195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.681349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.681548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.681573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.681749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.681922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.681948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 
00:29:55.481 [2024-07-14 03:17:50.682127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.682301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.682327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.682476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.682682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.682707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.682850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.683045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.683071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.683308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.683578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.683630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 
00:29:55.481 [2024-07-14 03:17:50.683854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.684049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.684078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.684236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.684389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.684415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.684618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.684792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.684817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.685000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.685177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.685202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 
00:29:55.481 [2024-07-14 03:17:50.685408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.685607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.685632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.685831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.686017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.686044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.686197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.686373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.686399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.686576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.686753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.686777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 
00:29:55.481 [2024-07-14 03:17:50.686956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.687132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.687157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.687319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.687498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.687526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.481 [2024-07-14 03:17:50.687685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.687872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.481 [2024-07-14 03:17:50.687898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.481 qpair failed and we were unable to recover it. 00:29:55.482 [2024-07-14 03:17:50.688077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.688253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.688277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.482 qpair failed and we were unable to recover it. 
00:29:55.482 [2024-07-14 03:17:50.688478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.688655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.688680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.482 qpair failed and we were unable to recover it. 00:29:55.482 [2024-07-14 03:17:50.688827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.689013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.689039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.482 qpair failed and we were unable to recover it. 00:29:55.482 [2024-07-14 03:17:50.689223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.689372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.689397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.482 qpair failed and we were unable to recover it. 00:29:55.482 [2024-07-14 03:17:50.689571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.689743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.689770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.482 qpair failed and we were unable to recover it. 
00:29:55.482 [2024-07-14 03:17:50.689949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.690124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.690148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.482 qpair failed and we were unable to recover it. 00:29:55.482 [2024-07-14 03:17:50.690305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.690479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.690504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.482 qpair failed and we were unable to recover it. 00:29:55.482 [2024-07-14 03:17:50.690682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.690832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.690857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.482 qpair failed and we were unable to recover it. 00:29:55.482 [2024-07-14 03:17:50.691048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.691228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.691253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.482 qpair failed and we were unable to recover it. 
00:29:55.482 [2024-07-14 03:17:50.691432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.691609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.691636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.482 qpair failed and we were unable to recover it. 00:29:55.482 [2024-07-14 03:17:50.691886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.692083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.692112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.482 qpair failed and we were unable to recover it. 00:29:55.482 [2024-07-14 03:17:50.692313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.692481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.692508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.482 qpair failed and we were unable to recover it. 00:29:55.482 [2024-07-14 03:17:50.692707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.692884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.482 [2024-07-14 03:17:50.692909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.482 qpair failed and we were unable to recover it. 
00:29:55.482 [2024-07-14 03:17:50.693057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.693260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.693285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.482 qpair failed and we were unable to recover it.
00:29:55.482 [2024-07-14 03:17:50.693434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.693582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.693607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.482 qpair failed and we were unable to recover it.
00:29:55.482 [2024-07-14 03:17:50.693779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.693954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.693979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.482 qpair failed and we were unable to recover it.
00:29:55.482 [2024-07-14 03:17:50.694159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.694346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.694371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.482 qpair failed and we were unable to recover it.
00:29:55.482 [2024-07-14 03:17:50.694548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.694726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.694751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.482 qpair failed and we were unable to recover it.
00:29:55.482 [2024-07-14 03:17:50.694953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.695151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.695179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.482 qpair failed and we were unable to recover it.
00:29:55.482 [2024-07-14 03:17:50.695495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.695688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.695713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.482 qpair failed and we were unable to recover it.
00:29:55.482 [2024-07-14 03:17:50.695937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.696115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.696141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.482 qpair failed and we were unable to recover it.
00:29:55.482 [2024-07-14 03:17:50.696317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.696467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.696492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.482 qpair failed and we were unable to recover it.
00:29:55.482 [2024-07-14 03:17:50.696688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.696880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.696924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.482 qpair failed and we were unable to recover it.
00:29:55.482 [2024-07-14 03:17:50.697103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.697285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.697338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.482 qpair failed and we were unable to recover it.
00:29:55.482 [2024-07-14 03:17:50.697541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.697727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.697755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.482 qpair failed and we were unable to recover it.
00:29:55.482 [2024-07-14 03:17:50.697933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.698103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.698128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.482 qpair failed and we were unable to recover it.
00:29:55.482 [2024-07-14 03:17:50.698284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.698489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.698514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.482 qpair failed and we were unable to recover it.
00:29:55.482 [2024-07-14 03:17:50.698687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.698875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.698911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.482 qpair failed and we were unable to recover it.
00:29:55.482 [2024-07-14 03:17:50.699100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.699318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.699346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.482 qpair failed and we were unable to recover it.
00:29:55.482 [2024-07-14 03:17:50.699531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.699708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.699732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.482 qpair failed and we were unable to recover it.
00:29:55.482 [2024-07-14 03:17:50.699915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.700136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.482 [2024-07-14 03:17:50.700164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.483 qpair failed and we were unable to recover it.
00:29:55.483 [2024-07-14 03:17:50.700456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.483 [2024-07-14 03:17:50.700801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.483 [2024-07-14 03:17:50.700862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.483 qpair failed and we were unable to recover it.
00:29:55.483 [2024-07-14 03:17:50.701070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.701374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.701428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.701644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.701848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.701880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.702034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.702214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.702239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.702419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.702568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.702593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.702775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.702961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.702987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.703165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.703380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.703406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.703564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.703711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.703735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.703894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.704048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.704073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.704272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.704452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.704476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.704668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.704874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.704900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.705045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.705219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.705245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.705427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.705574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.705599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.705746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.705923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.705949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.706135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.706334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.706358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.706510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.706688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.706713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.706862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.707027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.707053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.707215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.707418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.707443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.707652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.707801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.707825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.708053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.708228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.708253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.708433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.708579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.708603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.708808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.708985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.709010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.709163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.709311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.709337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.709511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.709660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.709684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.709863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.710074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.710099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.710305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.710509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.710534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.710710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.710905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.710948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.711129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.711363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.711391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.711667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.711911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.711939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.712168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.712411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.712462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.712669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.712872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.712900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.713094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.713405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.713455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.713801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.714042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.714071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.714242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.714442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.714468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.714670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.714901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.714929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.715098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.715289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.715318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.715553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.715726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.715754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.715948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.716123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.716152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.754 [2024-07-14 03:17:50.716358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.716582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.754 [2024-07-14 03:17:50.716647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.754 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.716862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.717066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.717093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.717363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.717755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.717801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.717987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.718143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.718168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.718360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.718577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.718605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.718777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.718941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.718970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.719165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.719337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.719365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.719558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.719748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.719778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.719982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.720186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.720211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.720411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.720615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.720640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.720845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.721059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.721086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.721277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.721559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.721611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.721783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.721958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.721985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.722187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.722453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.722505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.722726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.722954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.723010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.723187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.723361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.723406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.723608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.723800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.723828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.724058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.724257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.724282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.724476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.724750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.724778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.724996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.725165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.725192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.725366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.725586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.725637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.725871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.726049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.726074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.726262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.726583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.726632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.726826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.727002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.727031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.727225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.727422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.727450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.727648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.727880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.727909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.728106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.728454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.728505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.728720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.728904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.728930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.729116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.729382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.729433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.729662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.729841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.729871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.730049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.730273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.730299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.730455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.730597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.755 [2024-07-14 03:17:50.730621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.755 qpair failed and we were unable to recover it.
00:29:55.755 [2024-07-14 03:17:50.730838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.731018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.731043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.731267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.731510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.731536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.731689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.731839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.731878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.732056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.732251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.732279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 
00:29:55.755 [2024-07-14 03:17:50.732482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.732735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.732793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.732964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.733163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.733190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.733465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.733661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.733688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.733921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.734092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.734117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 
00:29:55.755 [2024-07-14 03:17:50.734318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.734476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.734500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.734702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.734879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.734907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.735108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.735284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.735309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.735462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.735634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.735659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 
00:29:55.755 [2024-07-14 03:17:50.735809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.736030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.736058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.736286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.736525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.736572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.736775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.736974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.737003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.737198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.737406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.737439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 
00:29:55.755 [2024-07-14 03:17:50.737634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.737857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.737891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.738117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.738366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.738403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.738574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.738734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.738764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.738998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.739222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.739251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 
00:29:55.755 [2024-07-14 03:17:50.739419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.739613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.739640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.739835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.740039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.740067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.740273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.740495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.740523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.740691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.740890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.740918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 
00:29:55.755 [2024-07-14 03:17:50.741122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.741341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.741369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.741539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.741705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.741732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.741950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.742151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.742179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.742399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.742690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.742740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 
00:29:55.755 [2024-07-14 03:17:50.742968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.743170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.743204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.743380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.743598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.743648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.743828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.744049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.744075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.744303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.744644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.744691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 
00:29:55.755 [2024-07-14 03:17:50.744907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.745130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.745157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.745376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.745660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.745713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.746007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.746273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.746320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.746511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.746731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.746782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 
00:29:55.755 [2024-07-14 03:17:50.746989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.747163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.747189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.747360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.747563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.747589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.747814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.748018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.748052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 00:29:55.755 [2024-07-14 03:17:50.748251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.748414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.748441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.755 qpair failed and we were unable to recover it. 
00:29:55.755 [2024-07-14 03:17:50.748626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.755 [2024-07-14 03:17:50.748801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.748828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 00:29:55.756 [2024-07-14 03:17:50.749031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.749202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.749231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 00:29:55.756 [2024-07-14 03:17:50.749426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.749617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.749645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 00:29:55.756 [2024-07-14 03:17:50.749875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.750078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.750103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 
00:29:55.756 [2024-07-14 03:17:50.750282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.750551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.750604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 00:29:55.756 [2024-07-14 03:17:50.750798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.751005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.751031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 00:29:55.756 [2024-07-14 03:17:50.751190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.751336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.751361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 00:29:55.756 [2024-07-14 03:17:50.751563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.751763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.751791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 
00:29:55.756 [2024-07-14 03:17:50.751995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.752191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.752224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 00:29:55.756 [2024-07-14 03:17:50.752454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.752656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.752681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 00:29:55.756 [2024-07-14 03:17:50.752924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.753155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.753181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 00:29:55.756 [2024-07-14 03:17:50.753334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.753597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.753651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 
00:29:55.756 [2024-07-14 03:17:50.753923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.754102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.754127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 00:29:55.756 [2024-07-14 03:17:50.754338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.754540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.754567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 00:29:55.756 [2024-07-14 03:17:50.754788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.754956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.754984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 00:29:55.756 [2024-07-14 03:17:50.755157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.755352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.755377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 
00:29:55.756 [2024-07-14 03:17:50.755577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.755780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.755809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 00:29:55.756 [2024-07-14 03:17:50.755985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.756210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.756263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 00:29:55.756 [2024-07-14 03:17:50.756477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.756652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.756694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 00:29:55.756 [2024-07-14 03:17:50.756898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.757091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.756 [2024-07-14 03:17:50.757118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.756 qpair failed and we were unable to recover it. 
00:29:55.757 [2024-07-14 03:17:50.795701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.795912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.795942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 00:29:55.757 [2024-07-14 03:17:50.796138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.796317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.796342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 00:29:55.757 [2024-07-14 03:17:50.796557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.796769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.796799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 00:29:55.757 [2024-07-14 03:17:50.797015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.797279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.797333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 
00:29:55.757 [2024-07-14 03:17:50.797530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.797728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.797756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 00:29:55.757 [2024-07-14 03:17:50.797955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.798147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.798174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 00:29:55.757 [2024-07-14 03:17:50.798508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.798742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.798770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 00:29:55.757 [2024-07-14 03:17:50.798961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.799180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.799250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 
00:29:55.757 [2024-07-14 03:17:50.799419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.799635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.799685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 00:29:55.757 [2024-07-14 03:17:50.799888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.800107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.800132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 00:29:55.757 [2024-07-14 03:17:50.800420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.800756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.800805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 00:29:55.757 [2024-07-14 03:17:50.801034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.801215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.801239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 
00:29:55.757 [2024-07-14 03:17:50.801420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.801565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.801591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 00:29:55.757 [2024-07-14 03:17:50.801822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.802059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.802088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 00:29:55.757 [2024-07-14 03:17:50.802423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.802667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.802692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 00:29:55.757 [2024-07-14 03:17:50.802848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.803008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.803034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 
00:29:55.757 [2024-07-14 03:17:50.803218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.803384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.803411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 00:29:55.757 [2024-07-14 03:17:50.803791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.804033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.804059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 00:29:55.757 [2024-07-14 03:17:50.804231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.804488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.804544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 00:29:55.757 [2024-07-14 03:17:50.804720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.805001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.805031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 
00:29:55.757 [2024-07-14 03:17:50.805207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.805491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.805552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 00:29:55.757 [2024-07-14 03:17:50.805781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.805986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.806011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 00:29:55.757 [2024-07-14 03:17:50.806281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.806694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.806753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.757 qpair failed and we were unable to recover it. 00:29:55.757 [2024-07-14 03:17:50.806975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.757 [2024-07-14 03:17:50.807180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.807236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 
00:29:55.758 [2024-07-14 03:17:50.807461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.807619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.807644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.807797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.807996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.808024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.808290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.808704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.808771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.808970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.809202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.809254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 
00:29:55.758 [2024-07-14 03:17:50.809486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.809778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.809841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.810070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.810323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.810368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.810626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.810838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.810873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.811096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.811284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.811329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 
00:29:55.758 [2024-07-14 03:17:50.811507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.811682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.811706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.811922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.812126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.812154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.812350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.812524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.812567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.812765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.812932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.812965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 
00:29:55.758 [2024-07-14 03:17:50.813166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.813306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.813347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.813541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.813720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.813745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.813893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.814098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.814127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.814360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.814510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.814536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 
00:29:55.758 [2024-07-14 03:17:50.814717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.814898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.814925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.815106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.815311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.815357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.815548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.815736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.815764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.815976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.816176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.816204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 
00:29:55.758 [2024-07-14 03:17:50.816429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.816620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.816646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.816904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.817060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.817089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.817267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.817454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.817485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.817708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.817998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.818024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 
00:29:55.758 [2024-07-14 03:17:50.818211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.818354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.818379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.818631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.818847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.818881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.819106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.819325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.819370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.819591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.819759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.819787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 
00:29:55.758 [2024-07-14 03:17:50.819998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.820143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.820168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.820373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.820583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.820615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.820832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.821042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.821068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 00:29:55.758 [2024-07-14 03:17:50.821283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.821476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.758 [2024-07-14 03:17:50.821526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.758 qpair failed and we were unable to recover it. 
00:29:55.758 [2024-07-14 03:17:50.821733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.821936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.821962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.822109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.822305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.822351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.822548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.822767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.822794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.823006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.823173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.823201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.823366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.823586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.823631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.823852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.824078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.824103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.824328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.824569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.824596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.824798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.825004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.825030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.825231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.825420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.825449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.825681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.825875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.825923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.826129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.826335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.826380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.826601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.826768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.826796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.827029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.827239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.827285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.827489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.827663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.827687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.827894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.828067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.828092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.828315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.828558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.828606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.828776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.829004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.829029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.829209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.829390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.829436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.829609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.829835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.829862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.830046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.830223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.830248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.830424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.830624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.830653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.830951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.831126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.831167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.831364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.831556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.831585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.831782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.831975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.832001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.832167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.832380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.832424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.832672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.832898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.832924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.833098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.833345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.833390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.758 [2024-07-14 03:17:50.833585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.833832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.758 [2024-07-14 03:17:50.833857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.758 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.834044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.834191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.834216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.834416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.834637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.834665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.834892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.835069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.835095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.835342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.835581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.835608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.835775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.835985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.836011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.836187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.836334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.836359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.836559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.836759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.836784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.836963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.837110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.837135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.837321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.837515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.837543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.837763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.837960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.837985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.838158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.838352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.838379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.838570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.838779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.838806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.839016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.839214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.839241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.839440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.839662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.839690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.839913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.840082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.840109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.840304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.840552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.840597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.840829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.841036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.841062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.841221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.841376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.841418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.841589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.841778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.841806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.842012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.842238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.842266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.842470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.842693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.842722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.842921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.843075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.843101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.843318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.843544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.843571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.843788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.843958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.843985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.844143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.844324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.844348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.844494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.844685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.844713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.844932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.845095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.845121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.845316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.845578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.845621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.845823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.846009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.846035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.846215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.846394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.846418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.846573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.846737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.846765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.846993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.847169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.847194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.847396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.847611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.847656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.847832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.848018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.848044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.848244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.848436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.848480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.848733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.848916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.848944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.849169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.849392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.849443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.849641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.849795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.849820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.850014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.850289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.850334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.850672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.850916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.850945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.851178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.851427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.851452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.851653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.851832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.851860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.852054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.852260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.852284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.852510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.852661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.852688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.852907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.853083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.853108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.853313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.853522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.853566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.853800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.854023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.854052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.854273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.854520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.854548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.854740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.854938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.854964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.855168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.855382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.855426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.855601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.855770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.855798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.856001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.856192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.856223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.856464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.856679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.856727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.856893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.857069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.857110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.857326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.857495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.857523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.857745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.858010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.858040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.858265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.858499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.858527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.858721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.858879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.858905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.859053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.859227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.859268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.859460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.859700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.759 [2024-07-14 03:17:50.859728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.759 qpair failed and we were unable to recover it.
00:29:55.759 [2024-07-14 03:17:50.859903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.759 [2024-07-14 03:17:50.860079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.759 [2024-07-14 03:17:50.860104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.759 qpair failed and we were unable to recover it. 00:29:55.759 [2024-07-14 03:17:50.860278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.759 [2024-07-14 03:17:50.860440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.759 [2024-07-14 03:17:50.860468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.759 qpair failed and we were unable to recover it. 00:29:55.759 [2024-07-14 03:17:50.860693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.759 [2024-07-14 03:17:50.860863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.759 [2024-07-14 03:17:50.860899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.759 qpair failed and we were unable to recover it. 00:29:55.759 [2024-07-14 03:17:50.861057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.759 [2024-07-14 03:17:50.861293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.759 [2024-07-14 03:17:50.861338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.759 qpair failed and we were unable to recover it. 
00:29:55.760 [2024-07-14 03:17:50.861537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.861725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.861753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.861943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.862176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.862204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.862375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.862612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.862662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.862828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.863028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.863057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 
00:29:55.760 [2024-07-14 03:17:50.863247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.863462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.863506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.863706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.863899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.863946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.864158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.864365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.864410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.864665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.864858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.864891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 
00:29:55.760 [2024-07-14 03:17:50.865088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.865291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.865316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.865518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.865748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.865793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.865986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.866185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.866212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.866409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.866681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.866726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 
00:29:55.760 [2024-07-14 03:17:50.866916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.867131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.867176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.867353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.867555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.867582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.867777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.868021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.868065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.868255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.868523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.868551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 
00:29:55.760 [2024-07-14 03:17:50.868778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.868971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.869000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.869195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.869367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.869413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.869584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.869857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.869891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.870062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.870313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.870358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 
00:29:55.760 [2024-07-14 03:17:50.870585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.870751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.870778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.870971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.871123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.871148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.871347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.871576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.871604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.871769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.871957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.871986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 
00:29:55.760 [2024-07-14 03:17:50.872192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.872370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.872409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.872642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.872820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.872848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.873073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.873335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.873360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.873578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.873783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.873811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 
00:29:55.760 [2024-07-14 03:17:50.874007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.874209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.874254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.874478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.874690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.874738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.874916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.875095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.875120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.875379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.875624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.875653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 
00:29:55.760 [2024-07-14 03:17:50.875880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.876083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.876108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.876313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.876489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.876515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.876669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.876851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.876883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.877047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.877263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.877292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 
00:29:55.760 [2024-07-14 03:17:50.877515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2131699 Killed "${NVMF_APP[@]}" "$@"
00:29:55.760 [2024-07-14 03:17:50.877737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.760 [2024-07-14 03:17:50.877764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.760 qpair failed and we were unable to recover it.
00:29:55.760 [2024-07-14 03:17:50.877984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.760 03:17:50 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
[2024-07-14 03:17:50.878183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.760 [2024-07-14 03:17:50.878208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
03:17:50 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
qpair failed and we were unable to recover it.
00:29:55.760 03:17:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
[2024-07-14 03:17:50.878414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.760 03:17:50 -- common/autotest_common.sh@712 -- # xtrace_disable
[2024-07-14 03:17:50.878611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.760 03:17:50 -- common/autotest_common.sh@10 -- # set +x
[2024-07-14 03:17:50.878636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:29:55.760 [2024-07-14 03:17:50.878808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.879005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.879034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.879226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.879404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.879448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.879625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.879818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.879848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.880057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.880289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.880335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 
00:29:55.760 [2024-07-14 03:17:50.880599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.880796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.880824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.881003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.881200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.881231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.881440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.881640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.881686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 00:29:55.760 [2024-07-14 03:17:50.881855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.882037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.882065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.760 qpair failed and we were unable to recover it. 
00:29:55.760 [2024-07-14 03:17:50.882231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.760 03:17:50 -- nvmf/common.sh@469 -- # nvmfpid=2132274
00:29:55.760 03:17:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
[2024-07-14 03:17:50.882444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.760 03:17:50 -- nvmf/common.sh@470 -- # waitforlisten 2132274
[2024-07-14 03:17:50.882490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:29:55.760 03:17:50 -- common/autotest_common.sh@819 -- # '[' -z 2132274 ']'
[2024-07-14 03:17:50.882691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.760 03:17:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
[2024-07-14 03:17:50.882879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-14 03:17:50.882908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:29:55.760 03:17:50 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:55.760 03:17:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:55.760 [2024-07-14 03:17:50.883108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.760 03:17:50 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:55.760 [2024-07-14 03:17:50.883280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.760 03:17:50 -- common/autotest_common.sh@10 -- # set +x
00:29:55.760 [2024-07-14 03:17:50.883308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.760 qpair failed and we were unable to recover it.
00:29:55.760 [2024-07-14 03:17:50.883508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.760 [2024-07-14 03:17:50.883693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.760 [2024-07-14 03:17:50.883738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.760 qpair failed and we were unable to recover it.
00:29:55.760 [2024-07-14 03:17:50.883955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.760 [2024-07-14 03:17:50.884158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.760 [2024-07-14 03:17:50.884183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.760 qpair failed and we were unable to recover it.
00:29:55.760 [2024-07-14 03:17:50.884337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.760 [2024-07-14 03:17:50.884489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.760 [2024-07-14 03:17:50.884516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.760 qpair failed and we were unable to recover it.
00:29:55.760 [2024-07-14 03:17:50.884729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.884957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.760 [2024-07-14 03:17:50.884983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.761 qpair failed and we were unable to recover it. 00:29:55.761 [2024-07-14 03:17:50.885136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.761 [2024-07-14 03:17:50.885354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.761 [2024-07-14 03:17:50.885380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.761 qpair failed and we were unable to recover it. 00:29:55.761 [2024-07-14 03:17:50.885600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.761 [2024-07-14 03:17:50.885843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.761 [2024-07-14 03:17:50.885873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.761 qpair failed and we were unable to recover it. 00:29:55.761 [2024-07-14 03:17:50.886075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.761 [2024-07-14 03:17:50.886290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.761 [2024-07-14 03:17:50.886334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420 00:29:55.761 qpair failed and we were unable to recover it. 
00:29:55.761 [2024-07-14 03:17:50.886538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.886729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.886757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.886960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.887137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.887164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.887384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.887562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.887587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.887767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.887966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.887996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.888163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.888336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.888361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.888510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.888689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.888713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.888874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.889083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.889111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.889342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.889536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.889564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.889766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.889947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.889973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.890134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.890306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.890331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.890480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.890662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.890712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.890889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.891072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.891097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.891244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.891420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.891445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.891597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.891751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.891777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.891933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.892093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.892119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.892307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.892457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.892482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.892658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.892856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.892890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.893083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.893278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.893307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.893502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.893672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.893704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.893903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.894096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.894124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.894338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.894512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.894537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.894746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.894929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.894955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.895109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.895275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.895303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.895523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.895712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.895741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.895934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.896113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.896137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.896286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.896455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.896479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.896638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.896786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.896811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.896992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.897163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.897188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.897371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.897546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.897576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.897729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.897884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.897910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.898091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.898253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.898279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.898470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.898630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.898657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.898923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.899100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.899125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.899292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.899446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.899471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.899620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.899776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.899802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.899992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.900208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.900234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.900420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.900569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.900595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.900817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.900980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.901007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.901166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.901335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.901364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.901519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.901700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.901725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.901901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.902055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.902080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.902271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.902495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.902521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.902725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.902884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.902911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.903107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.903251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.903276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.903457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.903657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.903684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.903898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.904053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.904077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.904263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.904420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.904446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.904646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.904821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.904845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4b04000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Write completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Write completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Write completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Write completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Write completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Write completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Write completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Write completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Write completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Write completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Write completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Write completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 Read completed with error (sct=0, sc=8)
00:29:55.761 starting I/O failed
00:29:55.761 [2024-07-14 03:17:50.905219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:29:55.761 [2024-07-14 03:17:50.905396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.905619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.905649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.905846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.906039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.906065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.906216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.906420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.906446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.906599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.906806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.906831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.907018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.907174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.907200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.907411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.907588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.907618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.907767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.907923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.907950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.761 [2024-07-14 03:17:50.908103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.908286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.761 [2024-07-14 03:17:50.908311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.761 qpair failed and we were unable to recover it.
00:29:55.762 [2024-07-14 03:17:50.908502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.908680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.908706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.762 qpair failed and we were unable to recover it.
00:29:55.762 [2024-07-14 03:17:50.908869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.909049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.909076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.762 qpair failed and we were unable to recover it.
00:29:55.762 [2024-07-14 03:17:50.909284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.909439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.909464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.762 qpair failed and we were unable to recover it.
00:29:55.762 [2024-07-14 03:17:50.909615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.909827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.909852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.762 qpair failed and we were unable to recover it.
00:29:55.762 [2024-07-14 03:17:50.910021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.910162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.910187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.762 qpair failed and we were unable to recover it.
00:29:55.762 [2024-07-14 03:17:50.910345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.910521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.910547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.762 qpair failed and we were unable to recover it.
00:29:55.762 [2024-07-14 03:17:50.910693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.910875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.910900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.762 qpair failed and we were unable to recover it.
00:29:55.762 [2024-07-14 03:17:50.911057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.911242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.911275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.762 qpair failed and we were unable to recover it.
00:29:55.762 [2024-07-14 03:17:50.911462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.911606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.911631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.762 qpair failed and we were unable to recover it.
00:29:55.762 [2024-07-14 03:17:50.911830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.911986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.762 [2024-07-14 03:17:50.912012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.762 qpair failed and we were unable to recover it.
00:29:55.762 [2024-07-14 03:17:50.912190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.912381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.912406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.912589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.912790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.912816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.912980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.913131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.913157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.913309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.913489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.913515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 
00:29:55.762 [2024-07-14 03:17:50.913670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.913846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.913879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.914060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.914241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.914266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.914467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.914625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.914652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.914874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.915047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.915073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 
00:29:55.762 [2024-07-14 03:17:50.915260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.915431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.915457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.915617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.915815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.915841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.916023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.916185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.916210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.916377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.916530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.916555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 
00:29:55.762 [2024-07-14 03:17:50.916760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.916913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.916939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.917117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.917292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.917316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.917497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.917700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.917726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.917895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.918071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.918096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 
00:29:55.762 [2024-07-14 03:17:50.918245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.918423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.918449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.918602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.918815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.918840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.919065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.919244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.919269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.919416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.919616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.919640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 
00:29:55.762 [2024-07-14 03:17:50.919817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.919986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.920012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.920161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.920367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.920392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.920546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.920699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.920725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.920877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.921031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.921056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 
00:29:55.762 [2024-07-14 03:17:50.921267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.921469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.921494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.921701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.921855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.921898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.922076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.922251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.922276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.922458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.922602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.922627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 
00:29:55.762 [2024-07-14 03:17:50.922806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.922991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.923016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.923208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.923368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.923393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.923570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.923751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.923776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.923971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.924155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.924181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 
00:29:55.762 [2024-07-14 03:17:50.924387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.924556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.924582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.924769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.924945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.924972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.925151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.925332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.925357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.925542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.925718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.925744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 
00:29:55.762 [2024-07-14 03:17:50.925921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.926096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.926120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.926271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.926477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.926503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.926686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.926870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.926897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.926948] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:29:55.762 [2024-07-14 03:17:50.927038] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:55.762 [2024-07-14 03:17:50.927094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.927255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.927280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.927484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.927659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.927684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.927832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.928050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.928076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 
00:29:55.762 [2024-07-14 03:17:50.928260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.928411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.928435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.928636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.928811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.928836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.929011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.929188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.929213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.929391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.929536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.929561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 
00:29:55.762 [2024-07-14 03:17:50.929740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.929921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.929946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.930128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.930289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.930315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.930462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.930611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.930636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.930844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.931033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.931059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 
00:29:55.762 [2024-07-14 03:17:50.931269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.931452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.931478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.931686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.931846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.931878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.932056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.932261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.932286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 00:29:55.762 [2024-07-14 03:17:50.932464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.932643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.762 [2024-07-14 03:17:50.932668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.762 qpair failed and we were unable to recover it. 
00:29:55.762 [2024-07-14 03:17:50.932844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.933013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.933038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.763 qpair failed and we were unable to recover it. 00:29:55.763 [2024-07-14 03:17:50.933187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.933367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.933392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.763 qpair failed and we were unable to recover it. 00:29:55.763 [2024-07-14 03:17:50.933566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.933745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.933771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.763 qpair failed and we were unable to recover it. 00:29:55.763 [2024-07-14 03:17:50.933953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.934133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.934159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.763 qpair failed and we were unable to recover it. 
00:29:55.763 [2024-07-14 03:17:50.934338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.934543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.934568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.763 qpair failed and we were unable to recover it. 00:29:55.763 [2024-07-14 03:17:50.934774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.934980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.935006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.763 qpair failed and we were unable to recover it. 00:29:55.763 [2024-07-14 03:17:50.935211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.935365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.935392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.763 qpair failed and we were unable to recover it. 00:29:55.763 [2024-07-14 03:17:50.935550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.935723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.935749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.763 qpair failed and we were unable to recover it. 
00:29:55.763 [2024-07-14 03:17:50.935924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.936102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.936127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.763 qpair failed and we were unable to recover it. 00:29:55.763 [2024-07-14 03:17:50.936276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.936430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.936455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.763 qpair failed and we were unable to recover it. 00:29:55.763 [2024-07-14 03:17:50.936643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.936818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.936844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.763 qpair failed and we were unable to recover it. 00:29:55.763 [2024-07-14 03:17:50.937027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.937181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.763 [2024-07-14 03:17:50.937206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.763 qpair failed and we were unable to recover it. 
00:29:55.764 [2024-07-14 03:17:50.964454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.764 [2024-07-14 03:17:50.964606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.764 [2024-07-14 03:17:50.964631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.764 qpair failed and we were unable to recover it.
00:29:55.764 [2024-07-14 03:17:50.964808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.764 [2024-07-14 03:17:50.964971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.764 [2024-07-14 03:17:50.964998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.764 qpair failed and we were unable to recover it.
00:29:55.764 [2024-07-14 03:17:50.965169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.764 EAL: No free 2048 kB hugepages reported on node 1
00:29:55.764 [2024-07-14 03:17:50.965368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.764 [2024-07-14 03:17:50.965393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.764 qpair failed and we were unable to recover it.
00:29:55.764 [2024-07-14 03:17:50.965574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.764 [2024-07-14 03:17:50.965780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:55.764 [2024-07-14 03:17:50.965806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:55.764 qpair failed and we were unable to recover it.
00:29:55.764 [2024-07-14 03:17:50.966019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.966176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.966202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.966379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.966559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.966585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.966773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.966986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.967013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.967190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.967402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.967427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 
00:29:55.764 [2024-07-14 03:17:50.967599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.967769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.967794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.967952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.968156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.968185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.968391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.968549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.968575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.968748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.968934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.968959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 
00:29:55.764 [2024-07-14 03:17:50.969139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.969343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.969368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.969552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.969728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.969753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.969914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.970094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.970120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.970303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.970479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.970503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 
00:29:55.764 [2024-07-14 03:17:50.970654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.970832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.970859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.971015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.971173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.971199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.971402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.971610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.971636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.971787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.971971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.971996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 
00:29:55.764 [2024-07-14 03:17:50.972159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.972367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.972392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.972544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.972718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.972743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.972920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.973102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.973129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.973313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.973492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.973518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 
00:29:55.764 [2024-07-14 03:17:50.973696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.973877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.973903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.974079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.974265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.974290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.974488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.974675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.974701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.974908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.975057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.975083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 
00:29:55.764 [2024-07-14 03:17:50.975267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.975470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.975495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.975674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.975848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.975878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.976040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.976215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.976241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.976412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.976599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.976624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 
00:29:55.764 [2024-07-14 03:17:50.976802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.976999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.977026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.977206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.977358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.977384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.977556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.977731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.977755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.977941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.978116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.978141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 
00:29:55.764 [2024-07-14 03:17:50.978315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.978464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.978489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.978640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.978821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.978847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.979027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.979203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.979228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.979434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.979636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.979662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 
00:29:55.764 [2024-07-14 03:17:50.979875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.980054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.980081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.980245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.980425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.980450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.980611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.980786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.980811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.980963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.981171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.981198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 
00:29:55.764 [2024-07-14 03:17:50.981404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.981578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.981604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.981785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.981933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.981959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.982158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.982331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.982356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.982514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.982692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.982718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 
00:29:55.764 [2024-07-14 03:17:50.982872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.983022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.983048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.983198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.983372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.983398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.983554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.983700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.983725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.983903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.984056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.984082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 
00:29:55.764 [2024-07-14 03:17:50.984294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.984477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.984502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.984679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.984862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.984892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.985045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.985227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.985252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.985405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.985583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.985610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 
00:29:55.764 [2024-07-14 03:17:50.985787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.985966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.985992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.986146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.986350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.986376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.986549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.986726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.764 [2024-07-14 03:17:50.986752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.764 qpair failed and we were unable to recover it. 00:29:55.764 [2024-07-14 03:17:50.986909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.987113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.987138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 
00:29:55.765 [2024-07-14 03:17:50.987348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.987491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.987516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 00:29:55.765 [2024-07-14 03:17:50.987667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.987842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.987872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 00:29:55.765 [2024-07-14 03:17:50.988050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.988203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.988228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 00:29:55.765 [2024-07-14 03:17:50.988385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.988568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.988593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 
00:29:55.765 [2024-07-14 03:17:50.988769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.988975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.989001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 00:29:55.765 [2024-07-14 03:17:50.989157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.989302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.989326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 00:29:55.765 [2024-07-14 03:17:50.989501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.989706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.989731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 00:29:55.765 [2024-07-14 03:17:50.989884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.990063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.990089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 
00:29:55.765 [2024-07-14 03:17:50.990242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.990445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.990471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 00:29:55.765 [2024-07-14 03:17:50.990623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.990777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.990802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 00:29:55.765 [2024-07-14 03:17:50.990993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.991144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.991171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 00:29:55.765 [2024-07-14 03:17:50.991352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.991504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.991529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 
00:29:55.765 [2024-07-14 03:17:50.991683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.991860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.991892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 00:29:55.765 [2024-07-14 03:17:50.992071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.992272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.992297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 00:29:55.765 [2024-07-14 03:17:50.992503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.992659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.992684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 00:29:55.765 [2024-07-14 03:17:50.992861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.993085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.993112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 
00:29:55.765 [2024-07-14 03:17:50.993319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.993463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.993488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 00:29:55.765 [2024-07-14 03:17:50.993668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.993822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.765 [2024-07-14 03:17:50.993846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:55.765 qpair failed and we were unable to recover it. 00:29:55.765 [2024-07-14 03:17:50.994006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.994157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.994184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.036 qpair failed and we were unable to recover it. 00:29:56.036 [2024-07-14 03:17:50.994352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.994522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.994547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.036 qpair failed and we were unable to recover it. 
00:29:56.036 [2024-07-14 03:17:50.994702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.994880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.994905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.036 qpair failed and we were unable to recover it. 00:29:56.036 [2024-07-14 03:17:50.995073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.995252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.995277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.036 qpair failed and we were unable to recover it. 00:29:56.036 [2024-07-14 03:17:50.995459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.995615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.995639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.036 qpair failed and we were unable to recover it. 00:29:56.036 [2024-07-14 03:17:50.995786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.995960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.995986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.036 qpair failed and we were unable to recover it. 
00:29:56.036 [2024-07-14 03:17:50.996137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.996288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.996314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.036 qpair failed and we were unable to recover it. 00:29:56.036 [2024-07-14 03:17:50.996469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.996616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.996643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.036 qpair failed and we were unable to recover it. 00:29:56.036 [2024-07-14 03:17:50.996798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.996973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.996999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.036 qpair failed and we were unable to recover it. 00:29:56.036 [2024-07-14 03:17:50.997177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.997353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.036 [2024-07-14 03:17:50.997379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.036 qpair failed and we were unable to recover it. 
00:29:56.037 [2024-07-14 03:17:50.997563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:50.997738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:50.997764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 00:29:56.037 [2024-07-14 03:17:50.997920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:50.998097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:50.998124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 00:29:56.037 [2024-07-14 03:17:50.998312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:50.998468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:50.998494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 00:29:56.037 [2024-07-14 03:17:50.998697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:50.998767] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:56.037 [2024-07-14 03:17:50.998876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:50.998902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 
00:29:56.037 [2024-07-14 03:17:50.999053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:50.999255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:50.999281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 00:29:56.037 [2024-07-14 03:17:50.999456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:50.999631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:50.999656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 00:29:56.037 [2024-07-14 03:17:50.999858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.000071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.000097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 00:29:56.037 [2024-07-14 03:17:51.000284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.000462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.000487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 
00:29:56.037 [2024-07-14 03:17:51.000640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.000817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.000842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 00:29:56.037 [2024-07-14 03:17:51.001006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.001153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.001178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 00:29:56.037 [2024-07-14 03:17:51.001383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.001532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.001558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 00:29:56.037 [2024-07-14 03:17:51.001742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.001894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.001920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 
00:29:56.037 [2024-07-14 03:17:51.002085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.002239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.002264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 00:29:56.037 [2024-07-14 03:17:51.002416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.002594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.002620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 00:29:56.037 [2024-07-14 03:17:51.002775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.002957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.002990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 00:29:56.037 [2024-07-14 03:17:51.003167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.003345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.003370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 
00:29:56.037 [2024-07-14 03:17:51.003565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.003750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.003777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 00:29:56.037 [2024-07-14 03:17:51.003932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.004121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.004146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 00:29:56.037 [2024-07-14 03:17:51.004343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.004496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.004522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 00:29:56.037 [2024-07-14 03:17:51.004700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.004985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.005012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 
00:29:56.037 [2024-07-14 03:17:51.005171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.005349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.005375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 00:29:56.037 [2024-07-14 03:17:51.005554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.005761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.005788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.037 qpair failed and we were unable to recover it. 00:29:56.037 [2024-07-14 03:17:51.005954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.037 [2024-07-14 03:17:51.006130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.006155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 00:29:56.038 [2024-07-14 03:17:51.006382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.006585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.006610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 
00:29:56.038 [2024-07-14 03:17:51.006771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.006956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.006983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 00:29:56.038 [2024-07-14 03:17:51.007160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.007336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.007361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 00:29:56.038 [2024-07-14 03:17:51.007541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.007718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.007744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 00:29:56.038 [2024-07-14 03:17:51.007922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.008104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.008130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 
00:29:56.038 [2024-07-14 03:17:51.008321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.008499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.008524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 00:29:56.038 [2024-07-14 03:17:51.008676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.008913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.008940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 00:29:56.038 [2024-07-14 03:17:51.009125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.009330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.009355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 00:29:56.038 [2024-07-14 03:17:51.009535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.009710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.009734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 
00:29:56.038 [2024-07-14 03:17:51.009917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.010089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.010115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 00:29:56.038 [2024-07-14 03:17:51.010293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.010442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.010467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 00:29:56.038 [2024-07-14 03:17:51.010681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.010832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.010877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 00:29:56.038 [2024-07-14 03:17:51.011097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.011290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.011314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 
00:29:56.038 [2024-07-14 03:17:51.011500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.011671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.011696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 00:29:56.038 [2024-07-14 03:17:51.011861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.012042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.012068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 00:29:56.038 [2024-07-14 03:17:51.012259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.012433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.012458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 00:29:56.038 [2024-07-14 03:17:51.012666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.012815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.012841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 
00:29:56.038 [2024-07-14 03:17:51.013010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.013188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.013213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 00:29:56.038 [2024-07-14 03:17:51.013418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.013596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.013623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 00:29:56.038 [2024-07-14 03:17:51.013830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.014057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.014082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 00:29:56.038 [2024-07-14 03:17:51.014259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.014440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.014466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 
00:29:56.038 [2024-07-14 03:17:51.014643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.014825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.014850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 00:29:56.038 [2024-07-14 03:17:51.015017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.015224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.015249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.038 qpair failed and we were unable to recover it. 00:29:56.038 [2024-07-14 03:17:51.015397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.015578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.038 [2024-07-14 03:17:51.015604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-07-14 03:17:51.015810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-07-14 03:17:51.015961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-07-14 03:17:51.015987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 
00:29:56.039 [2024-07-14 03:17:51.016132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-07-14 03:17:51.016308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-07-14 03:17:51.016335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-07-14 03:17:51.016512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-07-14 03:17:51.016725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-07-14 03:17:51.016751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-07-14 03:17:51.016943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-07-14 03:17:51.017116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-07-14 03:17:51.017142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-07-14 03:17:51.017291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-07-14 03:17:51.017470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-07-14 03:17:51.017495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 
00:29:56.039 [2024-07-14 03:17:51.017653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-07-14 03:17:51.017843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-07-14 03:17:51.017874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-07-14 03:17:51.019085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-07-14 03:17:51.019322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-07-14 03:17:51.019350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-07-14 03:17:51.019545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-07-14 03:17:51.019722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-07-14 03:17:51.019748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 00:29:56.039 [2024-07-14 03:17:51.019942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-07-14 03:17:51.020132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.039 [2024-07-14 03:17:51.020159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.039 qpair failed and we were unable to recover it. 
00:29:56.042 [2024-07-14 03:17:51.058549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-07-14 03:17:51.058723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-07-14 03:17:51.058749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-07-14 03:17:51.058894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-07-14 03:17:51.059078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-07-14 03:17:51.059104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.042 qpair failed and we were unable to recover it. 00:29:56.042 [2024-07-14 03:17:51.059276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.042 [2024-07-14 03:17:51.059429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.059456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 00:29:56.043 [2024-07-14 03:17:51.059639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.059826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.059851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 
00:29:56.043 [2024-07-14 03:17:51.060019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.060218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.060244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 00:29:56.043 [2024-07-14 03:17:51.060433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.060621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.060649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 00:29:56.043 [2024-07-14 03:17:51.060832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.061016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.061043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 00:29:56.043 [2024-07-14 03:17:51.061254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.061433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.061460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 
00:29:56.043 [2024-07-14 03:17:51.061644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.061825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.061860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 00:29:56.043 [2024-07-14 03:17:51.062020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.062173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.062198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 00:29:56.043 [2024-07-14 03:17:51.062393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.062599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.062625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 00:29:56.043 [2024-07-14 03:17:51.063530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.063713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.063740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 
00:29:56.043 [2024-07-14 03:17:51.063916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.064078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.064104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 00:29:56.043 [2024-07-14 03:17:51.064318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.064508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.064533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 00:29:56.043 [2024-07-14 03:17:51.064690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.064877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.064913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 00:29:56.043 [2024-07-14 03:17:51.065069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.065252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.065288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 
00:29:56.043 [2024-07-14 03:17:51.065466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.065644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.065670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 00:29:56.043 [2024-07-14 03:17:51.065822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.066014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.066041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 00:29:56.043 [2024-07-14 03:17:51.066200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.066406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.066432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 00:29:56.043 [2024-07-14 03:17:51.066628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.066824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.066864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 
00:29:56.043 [2024-07-14 03:17:51.067082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.067272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.067298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 00:29:56.043 [2024-07-14 03:17:51.067503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.067678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.067704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 00:29:56.043 [2024-07-14 03:17:51.067893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.068068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.068095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 00:29:56.043 [2024-07-14 03:17:51.068310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.068462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.068497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 
00:29:56.043 [2024-07-14 03:17:51.068702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.068912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.068939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.043 qpair failed and we were unable to recover it. 00:29:56.043 [2024-07-14 03:17:51.069093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.043 [2024-07-14 03:17:51.069834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.069894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.070082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.070254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.070279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.070469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.070654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.070691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 
00:29:56.044 [2024-07-14 03:17:51.070876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.071030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.071056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.071241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.071405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.071435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.071665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.071810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.071842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.072006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.072186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.072211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 
00:29:56.044 [2024-07-14 03:17:51.072369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.072542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.072568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.072765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.072991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.073017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.073203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.073363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.073392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.073588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.073785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.073810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 
00:29:56.044 [2024-07-14 03:17:51.073999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.074154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.074180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.074329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.075112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.075142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.075378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.075598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.075624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.075828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.076029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.076057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 
00:29:56.044 [2024-07-14 03:17:51.076245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.076446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.076472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.076627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.076817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.076844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.077037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.077231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.077256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.077440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.077619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.077647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 
00:29:56.044 [2024-07-14 03:17:51.078412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.078595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.078620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.078779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.079002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.079029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.079245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.079398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.079423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.079636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.079813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.079839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 
00:29:56.044 [2024-07-14 03:17:51.080025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.080206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.080231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.080421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.080608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.044 [2024-07-14 03:17:51.080635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.044 qpair failed and we were unable to recover it. 00:29:56.044 [2024-07-14 03:17:51.080812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.045 [2024-07-14 03:17:51.081001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.045 [2024-07-14 03:17:51.081027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.045 qpair failed and we were unable to recover it. 00:29:56.045 [2024-07-14 03:17:51.081205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.045 [2024-07-14 03:17:51.081417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.045 [2024-07-14 03:17:51.081450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.045 qpair failed and we were unable to recover it. 
00:29:56.045 [2024-07-14 03:17:51.081602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.045 [2024-07-14 03:17:51.081757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.045 [2024-07-14 03:17:51.081783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.045 qpair failed and we were unable to recover it. 00:29:56.045 [2024-07-14 03:17:51.081979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.045 [2024-07-14 03:17:51.082160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.045 [2024-07-14 03:17:51.082186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.045 qpair failed and we were unable to recover it. 00:29:56.045 [2024-07-14 03:17:51.082368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.045 [2024-07-14 03:17:51.083257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.045 [2024-07-14 03:17:51.083308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.045 qpair failed and we were unable to recover it. 00:29:56.045 [2024-07-14 03:17:51.083522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.045 [2024-07-14 03:17:51.084456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.045 [2024-07-14 03:17:51.084499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.045 qpair failed and we were unable to recover it. 
00:29:56.045 [2024-07-14 03:17:51.084714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.084923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.084950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.085113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.085269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.085294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.085453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.085646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.085675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.085894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.086074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.086100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.086287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.086461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.086487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.086639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.086801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.086827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.087018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.087194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.087220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.087400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.087576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.087603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.087785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.087969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.087996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.088152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.088342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.088369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.088524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.088707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.088737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.088917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.089100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.089125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.089311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.089469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.089494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.089644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.089824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.089850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.090041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.090219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.090245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.090406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.090585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.090610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.090765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.090928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.090954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.091128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.091295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.091323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.091506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.091659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.091685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.091837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.092021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.092049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.092236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.092409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.092435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.045 qpair failed and we were unable to recover it.
00:29:56.045 [2024-07-14 03:17:51.092605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.045 [2024-07-14 03:17:51.092796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.092823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.093010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.093215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.093240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.093393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.093547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.093573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.093750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.093932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.093959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.094137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.094345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.094371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.094529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.094736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.094762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.094947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.095118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.095143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.095303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.095487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.095514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.095701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.095889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.095915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.096064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.096217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.096242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.096417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.096614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.096640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.096852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.097023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.097049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.097231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.097377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.097402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.097540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.097688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.097713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.097863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.097856] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:29:56.046 [2024-07-14 03:17:51.098020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.098046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.098062] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:56.046 [2024-07-14 03:17:51.098083] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:56.046 [2024-07-14 03:17:51.098097] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:56.046 [2024-07-14 03:17:51.098231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.098175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:29:56.046 [2024-07-14 03:17:51.098247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:29:56.046 [2024-07-14 03:17:51.098280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:29:56.046 [2024-07-14 03:17:51.098387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.098385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:29:56.046 [2024-07-14 03:17:51.098412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.098609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.098792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.098818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.099015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.099178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.099204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.099364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.099545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.099572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.099722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.099888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.099915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.100100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.100255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.100286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.046 qpair failed and we were unable to recover it.
00:29:56.046 [2024-07-14 03:17:51.100461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.046 [2024-07-14 03:17:51.100609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.100635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.047 qpair failed and we were unable to recover it.
00:29:56.047 [2024-07-14 03:17:51.100813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.100967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.100994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.047 qpair failed and we were unable to recover it.
00:29:56.047 [2024-07-14 03:17:51.101140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.101325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.101350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.047 qpair failed and we were unable to recover it.
00:29:56.047 [2024-07-14 03:17:51.101541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.101698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.101723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.047 qpair failed and we were unable to recover it.
00:29:56.047 [2024-07-14 03:17:51.101901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.102069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.102095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.047 qpair failed and we were unable to recover it.
00:29:56.047 [2024-07-14 03:17:51.102275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.102451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.102477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.047 qpair failed and we were unable to recover it.
00:29:56.047 [2024-07-14 03:17:51.102652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.102932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.102958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.047 qpair failed and we were unable to recover it.
00:29:56.047 [2024-07-14 03:17:51.103218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.103374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.103400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.047 qpair failed and we were unable to recover it.
00:29:56.047 [2024-07-14 03:17:51.103582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.103749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.103775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.047 qpair failed and we were unable to recover it.
00:29:56.047 [2024-07-14 03:17:51.103997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.104154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.104179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.047 qpair failed and we were unable to recover it.
00:29:56.047 [2024-07-14 03:17:51.104354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.104507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.104533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.047 qpair failed and we were unable to recover it.
00:29:56.047 [2024-07-14 03:17:51.104743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.104989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.105015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.047 qpair failed and we were unable to recover it.
00:29:56.047 [2024-07-14 03:17:51.105199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.105379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.105418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.047 qpair failed and we were unable to recover it.
00:29:56.047 [2024-07-14 03:17:51.105598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.105752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.105778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.047 qpair failed and we were unable to recover it.
00:29:56.047 [2024-07-14 03:17:51.105953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.106103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.106130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.047 qpair failed and we were unable to recover it.
00:29:56.047 [2024-07-14 03:17:51.106320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.106489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.106514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.047 qpair failed and we were unable to recover it.
00:29:56.047 [2024-07-14 03:17:51.106784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.107037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.107064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.047 qpair failed and we were unable to recover it.
00:29:56.047 [2024-07-14 03:17:51.107225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.107393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.107420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.047 qpair failed and we were unable to recover it.
00:29:56.047 [2024-07-14 03:17:51.107616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.047 [2024-07-14 03:17:51.107795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.107820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.108013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.108173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.108199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.108376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.108540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.108566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.108753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.108932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.108959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.109136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.109310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.109336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.109493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.109674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.109700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.109862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.110039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.110065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.110305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.110450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.110475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.110624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.110774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.110802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.110980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.111156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.111181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.111420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.111594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.111619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.111798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.111960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.111986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.112158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.112344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.112380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.112555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.112741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.112766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.112937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.113110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.113136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.113291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.113475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.113505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.113698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.113883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.113910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.114069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.114252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.114278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.114466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.114642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.114669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.114817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.115003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.115031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.115194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.115395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.115422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.115594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.115762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.115787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.115966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.116204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.116230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.048 [2024-07-14 03:17:51.116420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.116621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.048 [2024-07-14 03:17:51.116647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.048 qpair failed and we were unable to recover it.
00:29:56.049 [2024-07-14 03:17:51.116802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.049 [2024-07-14 03:17:51.116966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.049 [2024-07-14 03:17:51.116992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.049 qpair failed and we were unable to recover it.
00:29:56.049 [2024-07-14 03:17:51.117155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.049 [2024-07-14 03:17:51.117362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.049 [2024-07-14 03:17:51.117393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.049 qpair failed and we were unable to recover it.
00:29:56.049 [2024-07-14 03:17:51.117541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.117725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.117751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-07-14 03:17:51.117929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.118085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.118111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-07-14 03:17:51.118297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.118481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.118508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-07-14 03:17:51.118651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.118836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.118863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 
00:29:56.049 [2024-07-14 03:17:51.119085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.119229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.119255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-07-14 03:17:51.119426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.119602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.119627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-07-14 03:17:51.119799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.119983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.120009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-07-14 03:17:51.120159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.120337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.120363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 
00:29:56.049 [2024-07-14 03:17:51.120542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.120755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.120781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-07-14 03:17:51.120950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.121153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.121183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-07-14 03:17:51.121482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.121745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.121771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-07-14 03:17:51.121943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.122128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.122154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 
00:29:56.049 [2024-07-14 03:17:51.122333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.122514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.122540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-07-14 03:17:51.122713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.122894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.122924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-07-14 03:17:51.123172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.123358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.123383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-07-14 03:17:51.123570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.123748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.123773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 
00:29:56.049 [2024-07-14 03:17:51.123968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.124172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.124210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-07-14 03:17:51.124498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.124689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.124715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-07-14 03:17:51.124903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.125082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.125107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 00:29:56.049 [2024-07-14 03:17:51.125257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.125407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.125434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.049 qpair failed and we were unable to recover it. 
00:29:56.049 [2024-07-14 03:17:51.125638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.049 [2024-07-14 03:17:51.125808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.125834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-07-14 03:17:51.126033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.126189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.126215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-07-14 03:17:51.126371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.126548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.126574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-07-14 03:17:51.126724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.126882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.126910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 
00:29:56.050 [2024-07-14 03:17:51.127093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.127252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.127278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-07-14 03:17:51.127459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.127640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.127666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-07-14 03:17:51.127830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.127985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.128011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-07-14 03:17:51.128185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.128385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.128411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 
00:29:56.050 [2024-07-14 03:17:51.128590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.128768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.128793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-07-14 03:17:51.129006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.129183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.129209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-07-14 03:17:51.129374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.129570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.129595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-07-14 03:17:51.129754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.129909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.129937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 
00:29:56.050 [2024-07-14 03:17:51.130210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.130397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.130434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-07-14 03:17:51.130603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.130884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.130921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-07-14 03:17:51.131135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.131335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.131362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-07-14 03:17:51.131516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.131718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.131743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 
00:29:56.050 [2024-07-14 03:17:51.131913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.132066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.132092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-07-14 03:17:51.132278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.132489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.132514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-07-14 03:17:51.132667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.132854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.132889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-07-14 03:17:51.133054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.133244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.133269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 
00:29:56.050 [2024-07-14 03:17:51.133424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.133582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.133619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-07-14 03:17:51.133773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.133953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.133980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-07-14 03:17:51.134138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.134303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.134328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 00:29:56.050 [2024-07-14 03:17:51.134593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.134741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.134767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.050 qpair failed and we were unable to recover it. 
00:29:56.050 [2024-07-14 03:17:51.134978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.135156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.050 [2024-07-14 03:17:51.135182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-07-14 03:17:51.135332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.135487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.135514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-07-14 03:17:51.135697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.135850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.135891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-07-14 03:17:51.136045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.136198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.136224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 
00:29:56.051 [2024-07-14 03:17:51.136428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.136586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.136613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-07-14 03:17:51.136774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.136957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.136983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-07-14 03:17:51.137165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.137344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.137370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-07-14 03:17:51.137552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.137720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.137746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 
00:29:56.051 [2024-07-14 03:17:51.137899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.138055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.138081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-07-14 03:17:51.138297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.138449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.138476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-07-14 03:17:51.138629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.139273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.139304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-07-14 03:17:51.139486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.139665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.139699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 
00:29:56.051 [2024-07-14 03:17:51.139956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.140117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.140143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-07-14 03:17:51.140322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.140472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.140498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-07-14 03:17:51.140657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.140807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.140833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-07-14 03:17:51.141033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.141181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.141207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 
00:29:56.051 [2024-07-14 03:17:51.141396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.141538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.141565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-07-14 03:17:51.141829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.142018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.142046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-07-14 03:17:51.142223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.142372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.142398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 00:29:56.051 [2024-07-14 03:17:51.142554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.142734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.051 [2024-07-14 03:17:51.142759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.051 qpair failed and we were unable to recover it. 
00:29:56.051 [2024-07-14 03:17:51.142909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.051 [2024-07-14 03:17:51.143051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.051 [2024-07-14 03:17:51.143077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.051 qpair failed and we were unable to recover it.
00:29:56.051 [2024-07-14 03:17:51.143223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.051 [2024-07-14 03:17:51.143389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.051 [2024-07-14 03:17:51.143416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.051 qpair failed and we were unable to recover it.
00:29:56.051 [2024-07-14 03:17:51.143596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.051 [2024-07-14 03:17:51.143747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.051 [2024-07-14 03:17:51.143772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.051 qpair failed and we were unable to recover it.
00:29:56.051 [2024-07-14 03:17:51.143926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.051 [2024-07-14 03:17:51.144108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.051 [2024-07-14 03:17:51.144135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.051 qpair failed and we were unable to recover it.
00:29:56.051 [2024-07-14 03:17:51.144337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.051 [2024-07-14 03:17:51.144480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.144506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.144681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.144840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.144899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.145096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.145290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.145316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.145502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.145709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.145734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.145894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.146060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.146088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.146268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.146442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.146469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.146625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.146779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.146805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.146996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.147140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.147174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.147340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.147495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.147520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.147697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.147877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.147910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.148079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.148264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.148288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.148561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.148733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.148758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.148933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.149084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.149110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.149282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.149449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.149474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.149653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.149799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.149824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.150075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.150291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.150316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.150463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.150620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.150646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.150790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.151001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.151029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.151173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.151331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.151355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.151537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.151717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.151743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.052 qpair failed and we were unable to recover it.
00:29:56.052 [2024-07-14 03:17:51.151916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.052 [2024-07-14 03:17:51.152068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.152093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.152312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.152460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.152486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.152698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.152877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.152903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.153060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.153237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.153263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.153445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.153607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.153631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.153803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.153983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.154009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.154175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.154335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.154359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.154559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.154764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.154790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.154991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.155160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.155185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.155367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.155541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.155568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.155772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.155926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.155952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.156262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.156442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.156467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.156638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.156810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.156835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.157019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.157173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.157199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.157389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.157594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.157619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.157797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.157965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.157992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.158172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.158309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.158334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.158493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.158637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.158662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.158851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.159052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.159077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.159255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.159424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.159450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.159617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.159792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.159818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.159983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.160142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.160168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.160353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.160522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.160547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.160755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.160921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.160948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.161098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.161252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.053 [2024-07-14 03:17:51.161277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.053 qpair failed and we were unable to recover it.
00:29:56.053 [2024-07-14 03:17:51.161439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.161668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.161694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.161972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.162146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.162172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.162463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.162643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.162669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.162844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.163006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.163032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.163232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.163384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.163412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.163589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.163737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.163763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.163953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.164100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.164126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.164307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.164493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.164519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.164688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.164863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.164908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.165111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.165278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.165304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.165488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.165694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.165719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.165885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.166041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.166066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.166241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.166415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.166440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.166639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.166812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.166838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.167027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.167180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.167205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.167350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.167521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.167547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.167710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.167890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.167917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.168061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.168240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.168265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.168455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.168600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.168628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.168809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.169025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.169051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.169245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.169405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.169432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.169595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.169783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.169809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.169974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.170156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.054 [2024-07-14 03:17:51.170182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.054 qpair failed and we were unable to recover it.
00:29:56.054 [2024-07-14 03:17:51.170333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.054 [2024-07-14 03:17:51.170516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.054 [2024-07-14 03:17:51.170543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.054 qpair failed and we were unable to recover it. 00:29:56.054 [2024-07-14 03:17:51.170723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.170903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.170930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 00:29:56.055 [2024-07-14 03:17:51.171078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.171223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.171249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 00:29:56.055 [2024-07-14 03:17:51.171431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.171593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.171618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 
00:29:56.055 [2024-07-14 03:17:51.171783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.171972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.172003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 00:29:56.055 [2024-07-14 03:17:51.172177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.172341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.172366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 00:29:56.055 [2024-07-14 03:17:51.172605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.172776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.172802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 00:29:56.055 [2024-07-14 03:17:51.172992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.173169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.173194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 
00:29:56.055 [2024-07-14 03:17:51.173374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.173627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.173653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 00:29:56.055 [2024-07-14 03:17:51.173816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.174011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.174037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 00:29:56.055 [2024-07-14 03:17:51.174249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.174399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.174426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 00:29:56.055 [2024-07-14 03:17:51.174587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.174800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.174826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 
00:29:56.055 [2024-07-14 03:17:51.175003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.175202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.175226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 00:29:56.055 [2024-07-14 03:17:51.175404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.175580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.175606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 00:29:56.055 [2024-07-14 03:17:51.175764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.175932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.175962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 00:29:56.055 [2024-07-14 03:17:51.176115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.176285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.176314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 
00:29:56.055 [2024-07-14 03:17:51.176496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.176646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.176673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 00:29:56.055 [2024-07-14 03:17:51.176825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.176995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.177020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 00:29:56.055 [2024-07-14 03:17:51.177169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.177346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.177372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 00:29:56.055 [2024-07-14 03:17:51.177517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.177787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.177812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 
00:29:56.055 [2024-07-14 03:17:51.177991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.178143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.178168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 00:29:56.055 [2024-07-14 03:17:51.178321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.178497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.178523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 00:29:56.055 [2024-07-14 03:17:51.178666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.178873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.178899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 00:29:56.055 [2024-07-14 03:17:51.179056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.179239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.179264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.055 qpair failed and we were unable to recover it. 
00:29:56.055 [2024-07-14 03:17:51.179429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.055 [2024-07-14 03:17:51.179625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.179655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 00:29:56.056 [2024-07-14 03:17:51.179834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.180002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.180028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 00:29:56.056 [2024-07-14 03:17:51.180195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.180375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.180401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 00:29:56.056 [2024-07-14 03:17:51.180580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.180728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.180753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 
00:29:56.056 [2024-07-14 03:17:51.180895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.181064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.181089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 00:29:56.056 [2024-07-14 03:17:51.181264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.181412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.181437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 00:29:56.056 [2024-07-14 03:17:51.181626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.181775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.181810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 00:29:56.056 [2024-07-14 03:17:51.181996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.182143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.182179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 
00:29:56.056 [2024-07-14 03:17:51.182364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.182512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.182538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 00:29:56.056 [2024-07-14 03:17:51.182713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.182871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.182907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 00:29:56.056 [2024-07-14 03:17:51.183068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.183239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.183268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 00:29:56.056 [2024-07-14 03:17:51.183417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.183560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.183587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 
00:29:56.056 [2024-07-14 03:17:51.183769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.183948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.183974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 00:29:56.056 [2024-07-14 03:17:51.184153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.184311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.184338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 00:29:56.056 [2024-07-14 03:17:51.184519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.184692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.184717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 00:29:56.056 [2024-07-14 03:17:51.184894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.185059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.185086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 
00:29:56.056 [2024-07-14 03:17:51.185253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.185440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.185465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 00:29:56.056 [2024-07-14 03:17:51.185621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.185803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.185831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 00:29:56.056 [2024-07-14 03:17:51.186009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.186187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.186212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 00:29:56.056 [2024-07-14 03:17:51.186403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.186555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.186579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 
00:29:56.056 [2024-07-14 03:17:51.186743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.186883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.186909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 00:29:56.056 [2024-07-14 03:17:51.187078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.187282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.187307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 00:29:56.056 [2024-07-14 03:17:51.187461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.187643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.187670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 00:29:56.056 [2024-07-14 03:17:51.187828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.187988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.188016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.056 qpair failed and we were unable to recover it. 
00:29:56.056 [2024-07-14 03:17:51.188218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.188430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.056 [2024-07-14 03:17:51.188456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-07-14 03:17:51.188614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.188785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.188810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-07-14 03:17:51.188962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.189118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.189144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-07-14 03:17:51.189326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.189492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.189516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 
00:29:56.057 [2024-07-14 03:17:51.189693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.189887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.189914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-07-14 03:17:51.190091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.190245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.190270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-07-14 03:17:51.190448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.190599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.190624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-07-14 03:17:51.190805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.190959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.190986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 
00:29:56.057 [2024-07-14 03:17:51.191159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.191311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.191336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-07-14 03:17:51.191534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.191701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.191726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-07-14 03:17:51.191880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.192042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.192067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-07-14 03:17:51.192237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.192384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.192409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 
00:29:56.057 [2024-07-14 03:17:51.192580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.192753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.192779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-07-14 03:17:51.192962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.193132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.193158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-07-14 03:17:51.193309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.193478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.193503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-07-14 03:17:51.193646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.193816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.193841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 
00:29:56.057 [2024-07-14 03:17:51.194025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.194196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.194223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-07-14 03:17:51.194409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.194615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.194640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-07-14 03:17:51.194807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.194957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.194984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-07-14 03:17:51.195135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.195285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.195310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 
00:29:56.057 [2024-07-14 03:17:51.195486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.195639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.195664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-07-14 03:17:51.195809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.195984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.057 [2024-07-14 03:17:51.196009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.057 qpair failed and we were unable to recover it. 00:29:56.057 [2024-07-14 03:17:51.196150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.196328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.196354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-07-14 03:17:51.196522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.196684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.196712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 
00:29:56.058 [2024-07-14 03:17:51.196885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.197036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.197061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-07-14 03:17:51.197210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.197404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.197429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-07-14 03:17:51.197587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.197794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.197819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-07-14 03:17:51.197988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.198191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.198217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 
00:29:56.058 [2024-07-14 03:17:51.198366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.198541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.198566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-07-14 03:17:51.198715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.198864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.198896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-07-14 03:17:51.199057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.199213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.199239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-07-14 03:17:51.199446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.199594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.199621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 
00:29:56.058 [2024-07-14 03:17:51.199773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.199970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.199995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-07-14 03:17:51.200174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.200327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.200354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-07-14 03:17:51.200501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.200672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.200699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-07-14 03:17:51.200845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.201058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.201084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 
00:29:56.058 [2024-07-14 03:17:51.201252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.201451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.201477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-07-14 03:17:51.201629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.201831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.201855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-07-14 03:17:51.202044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.202219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.202245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 00:29:56.058 [2024-07-14 03:17:51.202402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.202551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.202575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.058 qpair failed and we were unable to recover it. 
00:29:56.058 [2024-07-14 03:17:51.202782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.058 [2024-07-14 03:17:51.202992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.203018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-07-14 03:17:51.203220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.203391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.203415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-07-14 03:17:51.203612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.203814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.203839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-07-14 03:17:51.204020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.204211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.204237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 
00:29:56.059 [2024-07-14 03:17:51.204397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.204575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.204600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-07-14 03:17:51.204779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.204945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.204972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-07-14 03:17:51.205144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.205294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.205320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-07-14 03:17:51.205497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.205675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.205700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 
00:29:56.059 [2024-07-14 03:17:51.205843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.206016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.206042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-07-14 03:17:51.206188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.206360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.206385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-07-14 03:17:51.206553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.206717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.206741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-07-14 03:17:51.206920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.207096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.207122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 
00:29:56.059 [2024-07-14 03:17:51.207287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.207444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.207469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-07-14 03:17:51.207628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.207801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.207826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-07-14 03:17:51.207989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.208143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.208177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-07-14 03:17:51.208358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.208511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.208536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 
00:29:56.059 [2024-07-14 03:17:51.208687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.208861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.208892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-07-14 03:17:51.209065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.209226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.209250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-07-14 03:17:51.209427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.209602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.209628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-07-14 03:17:51.209805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.209976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.210002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 
00:29:56.059 [2024-07-14 03:17:51.210187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.210361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.210386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-07-14 03:17:51.210564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.210705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.210730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-07-14 03:17:51.210916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.211088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.211112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 00:29:56.059 [2024-07-14 03:17:51.211277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.211448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.059 [2024-07-14 03:17:51.211474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.059 qpair failed and we were unable to recover it. 
00:29:56.059 [2024-07-14 03:17:51.211613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.211789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.211815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.211976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.212145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.212170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.212338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.212514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.212539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.212712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.212890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.212915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 
00:29:56.060 [2024-07-14 03:17:51.213088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.213225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.213250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.213404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.213550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.213576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.213747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.213915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.213940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.214109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.214301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.214326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 
00:29:56.060 [2024-07-14 03:17:51.214495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.214651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.214675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.214825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.215018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.215044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.215189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.215347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.215373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.215521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.215670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.215695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 
00:29:56.060 [2024-07-14 03:17:51.215861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.216024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.216049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.216220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.216406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.216433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.216611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.216761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.216785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.216934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.217096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.217120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 
00:29:56.060 [2024-07-14 03:17:51.217273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.217426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.217451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.217627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.217798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.217823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.217977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.218139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.218165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.218326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.218502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.218528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 
00:29:56.060 [2024-07-14 03:17:51.218682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.218830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.218856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.219041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.219185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.219212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.219403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.219557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.219582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.219756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.219919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.219945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 
00:29:56.060 [2024-07-14 03:17:51.220149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.220326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.220352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.060 qpair failed and we were unable to recover it. 00:29:56.060 [2024-07-14 03:17:51.220515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.060 [2024-07-14 03:17:51.220662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-07-14 03:17:51.220686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-07-14 03:17:51.220839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-07-14 03:17:51.221044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-07-14 03:17:51.221070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-07-14 03:17:51.221211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-07-14 03:17:51.221397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-07-14 03:17:51.221422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 
00:29:56.061 [2024-07-14 03:17:51.221570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-07-14 03:17:51.221745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-07-14 03:17:51.221770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-07-14 03:17:51.221920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-07-14 03:17:51.222099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-07-14 03:17:51.222125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-07-14 03:17:51.222312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-07-14 03:17:51.222477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-07-14 03:17:51.222502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 00:29:56.061 [2024-07-14 03:17:51.222665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-07-14 03:17:51.222872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.061 [2024-07-14 03:17:51.222898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.061 qpair failed and we were unable to recover it. 
00:29:56.061 [2024-07-14 03:17:51.223045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.223209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.223234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-07-14 03:17:51.223397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.223568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.223593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-07-14 03:17:51.223755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.223938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.223964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-07-14 03:17:51.224121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.224283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.224309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-07-14 03:17:51.224488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.224659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.224684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-07-14 03:17:51.224860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.225024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.225050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-07-14 03:17:51.225241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.225415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.225441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-07-14 03:17:51.225588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.225765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.225792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-07-14 03:17:51.225947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.226131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.226158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-07-14 03:17:51.226336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.226539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.226563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-07-14 03:17:51.226738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.226919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.226946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-07-14 03:17:51.227112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.227298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.227323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-07-14 03:17:51.227473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.227619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.227644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-07-14 03:17:51.227821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.227971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.228014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-07-14 03:17:51.228172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.228319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.228346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-07-14 03:17:51.228527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.228678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.228705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.061 qpair failed and we were unable to recover it.
00:29:56.061 [2024-07-14 03:17:51.228856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.229021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.061 [2024-07-14 03:17:51.229047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.229196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.229334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.229359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.229525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.229675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.229700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.229846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.230028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.230055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.230207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.230407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.230432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.230608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.230778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.230808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.230980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.231177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.231203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.231365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.231533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.231559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.231740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.231903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.231928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.232101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.232306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.232332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.232483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.232655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.232681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.232853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.233041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.233067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.233218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.233396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.233422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.233570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.233739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.233765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.233947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.234097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.234121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.234281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.234453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.234482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.234638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.234789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.234814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.234995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.235153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.235178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.235321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.235490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.235515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.235687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.235830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.235855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.236019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.236167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.236192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.236338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.236505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.236530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.236667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.236809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.236836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.237021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.237199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.237224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.062 qpair failed and we were unable to recover it.
00:29:56.062 [2024-07-14 03:17:51.237380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.237568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.062 [2024-07-14 03:17:51.237594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.237768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.237920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.237950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.238114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.238267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.238293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.238470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.238649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.238676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.238843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.238996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.239023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.239183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.239331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.239357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.239508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.239654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.239681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.239858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.240029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.240055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.240231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.240412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.240439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.240614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.240789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.240814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.240984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.241130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.241155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.241329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.241473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.241503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.241680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.241827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.241852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.242036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.242183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.242209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.242355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.242532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.242558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.242694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.242847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.242882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.243029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.243199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.243224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.243396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.243573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.243598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.243766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.243918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.243944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.244092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.244245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.244271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.244472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.244651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.244675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.063 qpair failed and we were unable to recover it.
00:29:56.063 [2024-07-14 03:17:51.244854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.245001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.063 [2024-07-14 03:17:51.245027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.064 qpair failed and we were unable to recover it.
00:29:56.064 [2024-07-14 03:17:51.245181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.245342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.245368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.064 qpair failed and we were unable to recover it.
00:29:56.064 [2024-07-14 03:17:51.245524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.245668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.245693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.064 qpair failed and we were unable to recover it.
00:29:56.064 [2024-07-14 03:17:51.245895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.246039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.246065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.064 qpair failed and we were unable to recover it.
00:29:56.064 [2024-07-14 03:17:51.246236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.246387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.246412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.064 qpair failed and we were unable to recover it.
00:29:56.064 [2024-07-14 03:17:51.246560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.246718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.246743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.064 qpair failed and we were unable to recover it.
00:29:56.064 [2024-07-14 03:17:51.246911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.247110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.247135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.064 qpair failed and we were unable to recover it.
00:29:56.064 [2024-07-14 03:17:51.247284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.247454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.247479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.064 qpair failed and we were unable to recover it.
00:29:56.064 [2024-07-14 03:17:51.247657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.247828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.247853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.064 qpair failed and we were unable to recover it.
00:29:56.064 [2024-07-14 03:17:51.248007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.248154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.248179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.064 qpair failed and we were unable to recover it.
00:29:56.064 [2024-07-14 03:17:51.248383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.248528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.064 [2024-07-14 03:17:51.248554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420
00:29:56.064 qpair failed and we were unable to recover it.
00:29:56.064 [2024-07-14 03:17:51.248756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.248908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.248934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.064 qpair failed and we were unable to recover it. 00:29:56.064 [2024-07-14 03:17:51.249084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.249256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.249281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.064 qpair failed and we were unable to recover it. 00:29:56.064 [2024-07-14 03:17:51.249465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.249668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.249694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.064 qpair failed and we were unable to recover it. 00:29:56.064 [2024-07-14 03:17:51.249889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.250094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.250119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.064 qpair failed and we were unable to recover it. 
00:29:56.064 [2024-07-14 03:17:51.250292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.250467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.250492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.064 qpair failed and we were unable to recover it. 00:29:56.064 [2024-07-14 03:17:51.250639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.250806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.250831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.064 qpair failed and we were unable to recover it. 00:29:56.064 [2024-07-14 03:17:51.251014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.251189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.251215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.064 qpair failed and we were unable to recover it. 00:29:56.064 [2024-07-14 03:17:51.251361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.251532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.251557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.064 qpair failed and we were unable to recover it. 
00:29:56.064 [2024-07-14 03:17:51.251698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.251875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.251901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.064 qpair failed and we were unable to recover it. 00:29:56.064 [2024-07-14 03:17:51.252071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.252209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.252234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.064 qpair failed and we were unable to recover it. 00:29:56.064 [2024-07-14 03:17:51.252417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.252570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.064 [2024-07-14 03:17:51.252594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.064 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.252771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.252955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.252982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 
00:29:56.065 [2024-07-14 03:17:51.253149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.253315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.253341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.253515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.253693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.253718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.253859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.254017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.254043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.254239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.254378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.254403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 
00:29:56.065 [2024-07-14 03:17:51.254554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.254747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.254773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.254950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.255113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.255138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.255338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.255495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.255521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.255686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.255863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.255896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 
00:29:56.065 [2024-07-14 03:17:51.256102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.256253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.256277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.256446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.256627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.256653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.256806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.256950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.256975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.257144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.257296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.257320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 
00:29:56.065 [2024-07-14 03:17:51.257488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.257662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.257688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.257895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.258043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.258068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.258231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.258395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.258421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.258572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.258754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.258778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 
00:29:56.065 [2024-07-14 03:17:51.258947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.259108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.259133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.259284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.259462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.259488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.259639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.259819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.259844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.260022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.260171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.260195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 
00:29:56.065 [2024-07-14 03:17:51.260357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.260529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.260555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.260757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.260926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.260952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.261138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.261315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.261342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.065 qpair failed and we were unable to recover it. 00:29:56.065 [2024-07-14 03:17:51.261517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.261689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.065 [2024-07-14 03:17:51.261714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 
00:29:56.066 [2024-07-14 03:17:51.261887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.262039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.262064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 00:29:56.066 [2024-07-14 03:17:51.262232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.262395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.262419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 00:29:56.066 [2024-07-14 03:17:51.262599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.262745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.262771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 00:29:56.066 [2024-07-14 03:17:51.262926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.263116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.263142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 
00:29:56.066 [2024-07-14 03:17:51.263320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.263496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.263521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 00:29:56.066 [2024-07-14 03:17:51.263667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.263810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.263835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 00:29:56.066 [2024-07-14 03:17:51.264034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.264183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.264208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 00:29:56.066 [2024-07-14 03:17:51.264354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.264550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.264575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 
00:29:56.066 [2024-07-14 03:17:51.264731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.264878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.264905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 00:29:56.066 [2024-07-14 03:17:51.265047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.265216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.265241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 00:29:56.066 [2024-07-14 03:17:51.265408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.265592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.265617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 00:29:56.066 [2024-07-14 03:17:51.265820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.265996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.266021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 
00:29:56.066 [2024-07-14 03:17:51.266174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.266316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.266342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 00:29:56.066 [2024-07-14 03:17:51.266507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.266666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.266692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 00:29:56.066 [2024-07-14 03:17:51.266878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.267047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.267071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 00:29:56.066 [2024-07-14 03:17:51.267231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.267380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.267404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 
00:29:56.066 [2024-07-14 03:17:51.267552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.267725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.267750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 00:29:56.066 [2024-07-14 03:17:51.267898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.268049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.268073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 00:29:56.066 [2024-07-14 03:17:51.268221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.268378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.268403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 00:29:56.066 [2024-07-14 03:17:51.268541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.268692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.268719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 
00:29:56.066 [2024-07-14 03:17:51.268911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.269082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.269106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 00:29:56.066 [2024-07-14 03:17:51.269252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.269391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.269415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 00:29:56.066 [2024-07-14 03:17:51.269575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.269757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.066 [2024-07-14 03:17:51.269783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.066 qpair failed and we were unable to recover it. 00:29:56.066 [2024-07-14 03:17:51.269958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.270105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.270130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 
00:29:56.067 [2024-07-14 03:17:51.270279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.270426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.270452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-14 03:17:51.270602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.270749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.270775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-14 03:17:51.270928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.271109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.271134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-14 03:17:51.271287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.271458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.271482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 
00:29:56.067 [2024-07-14 03:17:51.271661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.271837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.271863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-14 03:17:51.272044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.272193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.272219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-14 03:17:51.272414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.272563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.272589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-14 03:17:51.272769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.272973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.272999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 
00:29:56.067 [2024-07-14 03:17:51.273159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.273303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.273329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-14 03:17:51.273484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.273663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.273689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-14 03:17:51.273841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.274004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.274030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-14 03:17:51.274196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.274373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.274399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 
00:29:56.067 [2024-07-14 03:17:51.274565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.274747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.274774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-14 03:17:51.274927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.275082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.275109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-14 03:17:51.275271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.275415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.275441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-14 03:17:51.275618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.275798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.275824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 
00:29:56.067 [2024-07-14 03:17:51.275983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.276158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.276183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.067 [2024-07-14 03:17:51.276322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.276498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.067 [2024-07-14 03:17:51.276524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.067 qpair failed and we were unable to recover it. 00:29:56.344 [2024-07-14 03:17:51.276668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.276813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.276840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-07-14 03:17:51.276996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.277156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.277181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 
00:29:56.344 [2024-07-14 03:17:51.277368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.277544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.277570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-07-14 03:17:51.277746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.277904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.277932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-07-14 03:17:51.278072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.278252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.278277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-07-14 03:17:51.278460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.278628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.278654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 
00:29:56.344 [2024-07-14 03:17:51.278798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.278972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.278998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-07-14 03:17:51.279149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.279325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.279351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-07-14 03:17:51.279497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.279645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.279672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-07-14 03:17:51.279851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.280002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.280027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 
00:29:56.344 [2024-07-14 03:17:51.280174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.280355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.280382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-07-14 03:17:51.280572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.280715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.280741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-07-14 03:17:51.280894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.281048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.281075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-07-14 03:17:51.281251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.281427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.281453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 
00:29:56.344 [2024-07-14 03:17:51.281622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.281766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.281793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-07-14 03:17:51.281946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.282123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.282148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-07-14 03:17:51.282342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.282510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.282536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.344 qpair failed and we were unable to recover it. 00:29:56.344 [2024-07-14 03:17:51.282714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.344 [2024-07-14 03:17:51.282894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.282920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 
00:29:56.345 [2024-07-14 03:17:51.283127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.283297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.283323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.283498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.283679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.283706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.283848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.284025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.284051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.284216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.284365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.284390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 
00:29:56.345 [2024-07-14 03:17:51.284567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.284750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.284775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.284986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.285162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.285187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.285356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.285502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.285526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.285730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.285878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.285904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 
00:29:56.345 [2024-07-14 03:17:51.286062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.286217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.286243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.286434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.286607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.286631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.286777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.286957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.286983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.287156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.287332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.287357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 
00:29:56.345 [2024-07-14 03:17:51.287503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.287671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.287696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.287873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.288017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.288043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.288188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.288336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.288364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.288527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.288692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.288717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 
00:29:56.345 [2024-07-14 03:17:51.288889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.289074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.289101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.289277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.289422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.289447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.289602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.289761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.289786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.289928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.290107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.290133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 
00:29:56.345 [2024-07-14 03:17:51.290293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.290442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.290467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.290625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.290827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.290853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.291032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.291186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.291211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.291355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.291529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.291553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 
00:29:56.345 [2024-07-14 03:17:51.291696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.291872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.291903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.292080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.292249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.292275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.292449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.292595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.292620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.292769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.292966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.292992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 
00:29:56.345 [2024-07-14 03:17:51.293197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.293364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.293389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.345 [2024-07-14 03:17:51.293542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.293716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.345 [2024-07-14 03:17:51.293743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.345 qpair failed and we were unable to recover it. 00:29:56.346 [2024-07-14 03:17:51.293896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.294047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.294073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-07-14 03:17:51.294224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.294372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.294397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 
00:29:56.346 [2024-07-14 03:17:51.294573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.294711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.294736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-07-14 03:17:51.294942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.295106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.295131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-07-14 03:17:51.295281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.295452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.295481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-07-14 03:17:51.295653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.295799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.295825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 
00:29:56.346 [2024-07-14 03:17:51.296011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.296158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.296184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-07-14 03:17:51.296356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.296508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.296533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-07-14 03:17:51.296709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.296918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.296944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-07-14 03:17:51.297120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.297292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.297317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 
00:29:56.346 [2024-07-14 03:17:51.297493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.297681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.297706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-07-14 03:17:51.297847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.298027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.298052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-07-14 03:17:51.298250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.298392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.298419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 00:29:56.346 [2024-07-14 03:17:51.298614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.298752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.346 [2024-07-14 03:17:51.298777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f4af4000b90 with addr=10.0.0.2, port=4420 00:29:56.346 qpair failed and we were unable to recover it. 
00:29:56.346 [... the connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." cycle for tqpair=0x7f4af4000b90 repeats identically through 03:17:51.323339; only the timestamps differ ...]
00:29:56.348 [... further identical connect() failures for tqpair=0x7f4af4000b90 through 03:17:51.324097 ...]
00:29:56.348 [2024-07-14 03:17:51.324241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1627dc0 is same with the state(5) to be set
00:29:56.348 [2024-07-14 03:17:51.324516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.348 [2024-07-14 03:17:51.324678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.348 [2024-07-14 03:17:51.324705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.348 qpair failed and we were unable to recover it.
00:29:56.349 [... the connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." cycle for tqpair=0x161a350 repeats identically through 03:17:51.328801; only the timestamps differ ...]
00:29:56.349 [2024-07-14 03:17:51.328953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.329104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.329128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 00:29:56.349 [2024-07-14 03:17:51.329319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.329459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.329484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 00:29:56.349 [2024-07-14 03:17:51.329634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.329775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.329799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 00:29:56.349 [2024-07-14 03:17:51.329969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.330112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.330137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 
00:29:56.349 [2024-07-14 03:17:51.330328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.330476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.330501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 00:29:56.349 [2024-07-14 03:17:51.330671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.330838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.330862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 00:29:56.349 [2024-07-14 03:17:51.331043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.331203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.331228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 00:29:56.349 [2024-07-14 03:17:51.331379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.331547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.331572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 
00:29:56.349 [2024-07-14 03:17:51.331716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.331881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.331907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 00:29:56.349 [2024-07-14 03:17:51.332118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.332282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.332307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 00:29:56.349 [2024-07-14 03:17:51.332449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.332594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.332619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 00:29:56.349 [2024-07-14 03:17:51.332765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.332927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.332953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 
00:29:56.349 [2024-07-14 03:17:51.333096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.333283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.333308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 00:29:56.349 [2024-07-14 03:17:51.333446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.333584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.333609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 00:29:56.349 [2024-07-14 03:17:51.333774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.333949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.333975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 00:29:56.349 [2024-07-14 03:17:51.334111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.334248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.334272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 
00:29:56.349 [2024-07-14 03:17:51.334444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.334620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.334644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 00:29:56.349 [2024-07-14 03:17:51.334786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.334958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.334983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 00:29:56.349 [2024-07-14 03:17:51.335145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.335319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.349 [2024-07-14 03:17:51.335343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.349 qpair failed and we were unable to recover it. 00:29:56.349 [2024-07-14 03:17:51.335483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.335657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.335681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 
00:29:56.350 [2024-07-14 03:17:51.335822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.335995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.336020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.336167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.336313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.336337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.336478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.336642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.336666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.336812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.336971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.336997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 
00:29:56.350 [2024-07-14 03:17:51.337146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.337289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.337314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.337484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.337630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.337654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.337800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.337992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.338017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.338162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.338325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.338354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 
00:29:56.350 [2024-07-14 03:17:51.338498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.338640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.338664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.338814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.338977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.339002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.339152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.339298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.339322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.339478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.339616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.339641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 
00:29:56.350 [2024-07-14 03:17:51.339811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.339959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.339984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.340159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.340297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.340322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.340472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.340662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.340686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.340882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.341033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.341057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 
00:29:56.350 [2024-07-14 03:17:51.341203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.341339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.341363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.341527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.341666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.341690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.341872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.342037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.342061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.342233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.342386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.342410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 
00:29:56.350 [2024-07-14 03:17:51.342607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.342746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.342771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.342931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.343102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.343127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.343306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.343452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.343479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.343623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.343765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.343789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 
00:29:56.350 [2024-07-14 03:17:51.343957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.344104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.344130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.344272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.344448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.344472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.344649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.344816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.344840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.344997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.345134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.345159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 
00:29:56.350 [2024-07-14 03:17:51.345342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.345482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.345507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.345654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.345792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.350 [2024-07-14 03:17:51.345816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.350 qpair failed and we were unable to recover it. 00:29:56.350 [2024-07-14 03:17:51.345958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.351 [2024-07-14 03:17:51.346121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.351 [2024-07-14 03:17:51.346146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.351 qpair failed and we were unable to recover it. 00:29:56.351 [2024-07-14 03:17:51.346295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.351 [2024-07-14 03:17:51.346490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.351 [2024-07-14 03:17:51.346514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.351 qpair failed and we were unable to recover it. 
00:29:56.351 [2024-07-14 03:17:51.346676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.351 [2024-07-14 03:17:51.346811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.351 [2024-07-14 03:17:51.346836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.351 qpair failed and we were unable to recover it. 00:29:56.351 [2024-07-14 03:17:51.347001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.351 [2024-07-14 03:17:51.347190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.351 [2024-07-14 03:17:51.347215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.351 qpair failed and we were unable to recover it. 00:29:56.351 [2024-07-14 03:17:51.347369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.351 [2024-07-14 03:17:51.347509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.351 [2024-07-14 03:17:51.347534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.351 qpair failed and we were unable to recover it. 00:29:56.351 [2024-07-14 03:17:51.347705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.351 [2024-07-14 03:17:51.347843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.351 [2024-07-14 03:17:51.347872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.351 qpair failed and we were unable to recover it. 
00:29:56.351 [2024-07-14 03:17:51.348026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.351 [2024-07-14 03:17:51.348217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.373 [2024-07-14 03:17:51.348241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.373 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1032:posix_sock_create connect() failed, errno = 111 → nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats continuously, differing only in timestamps, from 03:17:51.348 through 03:17:51.378 ...]
00:29:56.376 [2024-07-14 03:17:51.378501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.378649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.378674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.378815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.378994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.379020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.379179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.379319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.379344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.379486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.379663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.379688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 
00:29:56.376 [2024-07-14 03:17:51.379852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.379996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.380021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.380208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.380350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.380374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.380541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.380687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.380712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.380882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.381033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.381059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 
00:29:56.376 [2024-07-14 03:17:51.381250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.381406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.381430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.381612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.381749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.381774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.381968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.382116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.382140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.382299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.382451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.382478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 
00:29:56.376 [2024-07-14 03:17:51.382635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.382775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.382799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.382974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.383173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.383198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.383339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.383514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.383539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.383678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.383826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.383852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 
00:29:56.376 [2024-07-14 03:17:51.384058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.384194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.384219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.384422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.384591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.384615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.384758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.384901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.384926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.385071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.385243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.385267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 
00:29:56.376 [2024-07-14 03:17:51.385422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.385595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.385619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.385785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.385938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.385964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.386160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.386329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.386354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.386516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.386688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.386712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 
00:29:56.376 [2024-07-14 03:17:51.386903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.387052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.387076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.376 qpair failed and we were unable to recover it. 00:29:56.376 [2024-07-14 03:17:51.387251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.376 [2024-07-14 03:17:51.387402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.387427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.387573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.387738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.387762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.387939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.388088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.388113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 
00:29:56.377 [2024-07-14 03:17:51.388311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.388457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.388481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.388648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.388793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.388817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.388993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.389149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.389173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.389321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.389468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.389492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 
00:29:56.377 [2024-07-14 03:17:51.389666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.389810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.389834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.389990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.390135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.390160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.390325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.390463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.390487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.390631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.390820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.390845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 
00:29:56.377 [2024-07-14 03:17:51.391019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.391167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.391192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.391335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.391517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.391541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.391711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.391852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.391883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.392055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.392195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.392220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 
00:29:56.377 [2024-07-14 03:17:51.392394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.392538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.392563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.392724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.392862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.392905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.393069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.393236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.393261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.393464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.393622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.393647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 
00:29:56.377 [2024-07-14 03:17:51.393824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.393982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.394008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.394157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.394304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.394328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.394500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.394635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.394667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.394815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.394968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.394993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 
00:29:56.377 [2024-07-14 03:17:51.395154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.395309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.395333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.395510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.395646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.395671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.395852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.396002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.396027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.396176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.396334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.396359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 
00:29:56.377 [2024-07-14 03:17:51.396511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.396679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.396704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.396845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.397013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.397038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.397198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.397343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.397367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 00:29:56.377 [2024-07-14 03:17:51.397525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.397666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.377 [2024-07-14 03:17:51.397690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.377 qpair failed and we were unable to recover it. 
00:29:56.377 [2024-07-14 03:17:51.397859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.377 [2024-07-14 03:17:51.398033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.398058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.398213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.398356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.398381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.398557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.398697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.398722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.398882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.399038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.399062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.399209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.399354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.399380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.399550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.399687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.399712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.399884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.400040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.400065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.400206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.400380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.400405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.400547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.400689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.400713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.400890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.401043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.401068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.401206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.401378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.401402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.401575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.401720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.401745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.401908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.402058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.402082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.402252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.402393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.402417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.402560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.402712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.402736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.402905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.403063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.403088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.403232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.403367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.403391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.403563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.403724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.403749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.403953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.404089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.404114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.404264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.404439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.404463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.404605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.404743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.404767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.404918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.405086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.405110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.405286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.405431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.405456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.405633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.405787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.405811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.405986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.406155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.406180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.406329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.406471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.406496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.406667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.406837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.406862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.407010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.407144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.407168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.407347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.407524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.407548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.407697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.407841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.407872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.408046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.408205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.378 [2024-07-14 03:17:51.408230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.378 qpair failed and we were unable to recover it.
00:29:56.378 [2024-07-14 03:17:51.408397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.408573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.408597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.408743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.408916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.408942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.409081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.409224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.409249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.409416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.409582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.409606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.409798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.409950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.409976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.410145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.410291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.410316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.410491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.410680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.410705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.410864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.411016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.411041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.411186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.411346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.411371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.411539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.411681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.411706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.411890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.412041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.412070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.412237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.412396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.412420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.412587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.412752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.412777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.412920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.413113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.413138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.413283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.413470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.413495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.413673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.413817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.413842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.413997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.414158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.414183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.414374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.414544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.414569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.414735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.414908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.414933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.415092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.415262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.415287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.415431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.415582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.415611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.415775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.415916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.415941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.416104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.416264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.416288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.416478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.416655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.416679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.416851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.417005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.417030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.417220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.417360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.417384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.417584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.417720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.417745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.379 qpair failed and we were unable to recover it.
00:29:56.379 [2024-07-14 03:17:51.417907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.379 [2024-07-14 03:17:51.418094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.418119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.418275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.418459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.418484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.418630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.418791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.418816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.418965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.419127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.419151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.419337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.419506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.419530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.419700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.419874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.419898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.420065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.420225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.420249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.420420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.420576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.420600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.420774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.420911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.420937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.421189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.421331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.421355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.421519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.421676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.421700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.421876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.422025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.422050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.422214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.422354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.422379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.422567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.422764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.422788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.422942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.423085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.423110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.423250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.423421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.423445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.423585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.423734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.423758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.423911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.424064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.424089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.424236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.424383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.424408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.424578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.424766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.424791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.425003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.425164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.425188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.425367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.425513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.425538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.425692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.425876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.425902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.426048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.426193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.426217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.426365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.426508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.426533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.426695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.426879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.426904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.427048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.427189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.427214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.427395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.427534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.427559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.427697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.427876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.427902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.428075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.428223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.380 [2024-07-14 03:17:51.428248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.380 qpair failed and we were unable to recover it.
00:29:56.380 [2024-07-14 03:17:51.428397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.380 [2024-07-14 03:17:51.428567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.380 [2024-07-14 03:17:51.428592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.380 qpair failed and we were unable to recover it. 00:29:56.380 [2024-07-14 03:17:51.428754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.380 [2024-07-14 03:17:51.428929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.428955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.429098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.429267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.429291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.429465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.429611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.429635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 
00:29:56.381 [2024-07-14 03:17:51.429773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.429947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.429972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.430140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.430291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.430317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.430496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.430647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.430671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.430817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.431007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.431032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 
00:29:56.381 [2024-07-14 03:17:51.431181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.431331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.431357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.431504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.431652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.431678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.431826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.431971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.431996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.432138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.432311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.432336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 
00:29:56.381 [2024-07-14 03:17:51.432503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.432644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.432669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.432845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.433027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.433052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.433193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.433331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.433359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.433528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.433671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.433695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 
00:29:56.381 [2024-07-14 03:17:51.433835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.434011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.434036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.434185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.434339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.434363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.434533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.434675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.434699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.434861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.435046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.435071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 
00:29:56.381 [2024-07-14 03:17:51.435260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.435433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.435458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.435618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.435762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.435786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.435939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.436109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.436134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.436295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.436474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.436499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 
00:29:56.381 [2024-07-14 03:17:51.436672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.436823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.436849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.437010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.437147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.437172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.437344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.437499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.437524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.437687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.437888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.437913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 
00:29:56.381 [2024-07-14 03:17:51.438060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.438208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.438233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.438406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.438570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.438594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.438735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.438914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.438941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 00:29:56.381 [2024-07-14 03:17:51.439086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.439259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.439283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.381 qpair failed and we were unable to recover it. 
00:29:56.381 [2024-07-14 03:17:51.439432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.381 [2024-07-14 03:17:51.439607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.439632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.439782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.439954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.439979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.440148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.440287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.440312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.440456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.440656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.440680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 
00:29:56.382 [2024-07-14 03:17:51.440828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.440968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.440993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.441141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.441279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.441304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.441457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.441606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.441631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.441781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.441918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.441943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 
00:29:56.382 [2024-07-14 03:17:51.442087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.442227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.442251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.442422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.442570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.442596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.442738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.442908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.442933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.443087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.443229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.443254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 
00:29:56.382 [2024-07-14 03:17:51.443451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.443703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.443728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.443903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.444054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.444079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.444249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.444419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.444444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.444615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.444792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.444816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 
00:29:56.382 [2024-07-14 03:17:51.444959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.445120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.445145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.445312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.445499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.445524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.445676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.445818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.445842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.446014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.446165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.446190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 
00:29:56.382 [2024-07-14 03:17:51.446333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.446509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.446533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.446682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.446853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.446882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.447029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.447169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.447194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.447374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.447631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.447657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 
00:29:56.382 [2024-07-14 03:17:51.447804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.447973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.447999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.448143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.448303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.448328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.448473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.448669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.448693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 00:29:56.382 [2024-07-14 03:17:51.448831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.448999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.382 [2024-07-14 03:17:51.449024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.382 qpair failed and we were unable to recover it. 
00:29:56.385 [2024-07-14 03:17:51.478916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-14 03:17:51.479090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-14 03:17:51.479115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.385 qpair failed and we were unable to recover it. 00:29:56.385 [2024-07-14 03:17:51.479286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-14 03:17:51.479461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-14 03:17:51.479485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.385 qpair failed and we were unable to recover it. 00:29:56.385 [2024-07-14 03:17:51.479649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-14 03:17:51.479788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-14 03:17:51.479812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.385 qpair failed and we were unable to recover it. 00:29:56.385 [2024-07-14 03:17:51.479959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-14 03:17:51.480102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-14 03:17:51.480127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.385 qpair failed and we were unable to recover it. 
00:29:56.385 [2024-07-14 03:17:51.480268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-14 03:17:51.480441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-14 03:17:51.480466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.385 qpair failed and we were unable to recover it. 00:29:56.385 [2024-07-14 03:17:51.480619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-14 03:17:51.480761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-14 03:17:51.480786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.385 qpair failed and we were unable to recover it. 00:29:56.385 [2024-07-14 03:17:51.480938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-14 03:17:51.481111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-14 03:17:51.481135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.385 qpair failed and we were unable to recover it. 00:29:56.385 [2024-07-14 03:17:51.481294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-14 03:17:51.481456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-14 03:17:51.481480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.385 qpair failed and we were unable to recover it. 
00:29:56.385 [2024-07-14 03:17:51.481641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-14 03:17:51.481895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.385 [2024-07-14 03:17:51.481920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.385 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.482075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.482253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.482277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.482422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.482586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.482610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.482748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.483003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.483028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 
00:29:56.386 [2024-07-14 03:17:51.483181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.483341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.483365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.483515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.483660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.483685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.483861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.484018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.484042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.484214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.484382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.484406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 
00:29:56.386 [2024-07-14 03:17:51.484552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.484726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.484750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.484890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.485030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.485055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.485210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.485358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.485383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.485557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.485727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.485751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 
00:29:56.386 [2024-07-14 03:17:51.485920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.486090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.486115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.486255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.486408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.486432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.486602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.486746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.486770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.486918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.487063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.487092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 
00:29:56.386 [2024-07-14 03:17:51.487347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.487517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.487541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.487793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.487963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.487988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.488130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.488298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.488322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.488466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.488640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.488664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 
00:29:56.386 [2024-07-14 03:17:51.488804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.488992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.489017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.489179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.489383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.489407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.489583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.489756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.489780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.489934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.490077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.490101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 
00:29:56.386 [2024-07-14 03:17:51.490255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.490432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.490457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.490609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.490753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.490781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.490956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.491095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.491119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.491260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.491419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.491444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 
00:29:56.386 [2024-07-14 03:17:51.491614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.491763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.491787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.491924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.492072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.492097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.492242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.492413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.492437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-07-14 03:17:51.492575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.492748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.386 [2024-07-14 03:17:51.492772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.386 qpair failed and we were unable to recover it. 
00:29:56.387 [2024-07-14 03:17:51.492915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.493095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.493120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-07-14 03:17:51.493260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.493424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.493448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-07-14 03:17:51.493626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.493789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.493814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-07-14 03:17:51.493991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.494138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.494163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.387 qpair failed and we were unable to recover it. 
00:29:56.387 [2024-07-14 03:17:51.494331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.494471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.494495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-07-14 03:17:51.494672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.494841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.494870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-07-14 03:17:51.495041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.495217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.495241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-07-14 03:17:51.495393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.495561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.495586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.387 qpair failed and we were unable to recover it. 
00:29:56.387 [2024-07-14 03:17:51.495722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.495899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.495924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-07-14 03:17:51.496066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.496210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.496234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-07-14 03:17:51.496408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.496556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.496582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-07-14 03:17:51.496743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.496889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.496915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.387 qpair failed and we were unable to recover it. 
00:29:56.387 [2024-07-14 03:17:51.497100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.497254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.497280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-07-14 03:17:51.497427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.497617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.497642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-07-14 03:17:51.497808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.497950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.497976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-07-14 03:17:51.498132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.498310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.387 [2024-07-14 03:17:51.498334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.387 qpair failed and we were unable to recover it. 
00:29:56.387 [2024-07-14 03:17:51.498506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.387 [2024-07-14 03:17:51.498653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.387 [2024-07-14 03:17:51.498680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.387 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error / qpair failed sequence repeats for tqpair=0x161a350 (addr=10.0.0.2, port=4420) with advancing timestamps through 03:17:51.528771 ...]
00:29:56.390 [2024-07-14 03:17:51.528929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.390 [2024-07-14 03:17:51.529089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.390 [2024-07-14 03:17:51.529114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.390 qpair failed and we were unable to recover it.
00:29:56.390 [2024-07-14 03:17:51.529265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.529412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.529437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.390 qpair failed and we were unable to recover it. 00:29:56.390 [2024-07-14 03:17:51.529616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.529794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.529818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.390 qpair failed and we were unable to recover it. 00:29:56.390 [2024-07-14 03:17:51.529976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.530140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.530165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.390 qpair failed and we were unable to recover it. 00:29:56.390 [2024-07-14 03:17:51.530333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.530480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.530504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.390 qpair failed and we were unable to recover it. 
00:29:56.390 [2024-07-14 03:17:51.530675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.530852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.530883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.390 qpair failed and we were unable to recover it. 00:29:56.390 [2024-07-14 03:17:51.531029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.531170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.531194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.390 qpair failed and we were unable to recover it. 00:29:56.390 [2024-07-14 03:17:51.531354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.531513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.531538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.390 qpair failed and we were unable to recover it. 00:29:56.390 [2024-07-14 03:17:51.531710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.531881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.531906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.390 qpair failed and we were unable to recover it. 
00:29:56.390 [2024-07-14 03:17:51.532065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.532241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.532267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.390 qpair failed and we were unable to recover it. 00:29:56.390 [2024-07-14 03:17:51.532472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.532617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.532641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.390 qpair failed and we were unable to recover it. 00:29:56.390 [2024-07-14 03:17:51.532804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.532946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.532972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.390 qpair failed and we were unable to recover it. 00:29:56.390 [2024-07-14 03:17:51.533124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.533270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.533295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.390 qpair failed and we were unable to recover it. 
00:29:56.390 [2024-07-14 03:17:51.533491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.533655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.533679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.390 qpair failed and we were unable to recover it. 00:29:56.390 [2024-07-14 03:17:51.533850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.534008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.534033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.390 qpair failed and we were unable to recover it. 00:29:56.390 [2024-07-14 03:17:51.534202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.534378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.534403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.390 qpair failed and we were unable to recover it. 00:29:56.390 [2024-07-14 03:17:51.534543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.534714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.390 [2024-07-14 03:17:51.534739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.390 qpair failed and we were unable to recover it. 
00:29:56.391 [2024-07-14 03:17:51.534916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.535063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.535089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.535263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.535410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.535434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.535581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.535722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.535748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.535926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.536104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.536129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 
00:29:56.391 [2024-07-14 03:17:51.536300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.536441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.536465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.536627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.536796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.536820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.537013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.537180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.537205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.537362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.537515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.537539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 
00:29:56.391 [2024-07-14 03:17:51.537712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.537883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.537908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.538060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.538204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.538229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.538375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.538527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.538552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.538716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.538894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.538920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 
00:29:56.391 [2024-07-14 03:17:51.539072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.539220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.539245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.539384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.539549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.539573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.539720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.539879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.539905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.540071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.540236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.540260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 
00:29:56.391 [2024-07-14 03:17:51.540402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.540554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.540579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.540720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.540890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.540915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.541060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.541205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.541230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.541421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.541591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.541615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 
00:29:56.391 [2024-07-14 03:17:51.541783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.541928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.541953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.542130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.542301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.542325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.542502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.542651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.542675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.542851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.543000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.543024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 
00:29:56.391 [2024-07-14 03:17:51.543162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.543333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.543358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.543537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.543697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.543721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.543881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.544038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.544067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.544252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.544423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.544447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 
00:29:56.391 [2024-07-14 03:17:51.544614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.544755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.544780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.544928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.545077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.545101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.545247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.545392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.391 [2024-07-14 03:17:51.545416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.391 qpair failed and we were unable to recover it. 00:29:56.391 [2024-07-14 03:17:51.545557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.545743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.545768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 
00:29:56.392 [2024-07-14 03:17:51.545911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.546051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.546075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.546247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.546406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.546430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.546622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.546769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.546793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.546952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.547123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.547148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 
00:29:56.392 [2024-07-14 03:17:51.547295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.547485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.547513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.547655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.547820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.547845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.548004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.548144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.548169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.548336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.548477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.548502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 
00:29:56.392 [2024-07-14 03:17:51.548640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.548808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.548832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.548990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.549153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.549178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.549320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.549488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.549512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.549650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.549814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.549839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 
00:29:56.392 [2024-07-14 03:17:51.550012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.550169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.550194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.550447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.550647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.550672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.550817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.550958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.550983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.551163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.551336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.551360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 
00:29:56.392 [2024-07-14 03:17:51.551531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.551707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.551732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.551877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.552017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.552042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.552202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.552351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.552378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.552519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.552666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.552690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 
00:29:56.392 [2024-07-14 03:17:51.552830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.552981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.553007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.553178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.553322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.553346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.553506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.553677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.553702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.553878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.554036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.554060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 
00:29:56.392 [2024-07-14 03:17:51.554210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.554371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.554396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.554560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.554731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.392 [2024-07-14 03:17:51.554755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.392 qpair failed and we were unable to recover it. 00:29:56.392 [2024-07-14 03:17:51.554906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.555074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.555099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.555246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.555389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.555414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 
00:29:56.393 [2024-07-14 03:17:51.555669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.555803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.555828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.555987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.556133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.556157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.556335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.556499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.556524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.556691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.556863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.556893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 
00:29:56.393 [2024-07-14 03:17:51.557062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.557205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.557229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.557381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.557553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.557578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.557717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.557855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.557884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.558050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.558196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.558220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 
00:29:56.393 [2024-07-14 03:17:51.558368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.558558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.558582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.558727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.558871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.558896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.559067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.559240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.559265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.559441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.559625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.559651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 
00:29:56.393 [2024-07-14 03:17:51.559811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.559978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.560004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.560145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.560325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.560350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.560494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.560658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.560683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.560823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.561008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.561033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 
00:29:56.393 [2024-07-14 03:17:51.561187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.561367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.561392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.561533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.561705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.561730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.561880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.562018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.562043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.562182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.562318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.562342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 
00:29:56.393 [2024-07-14 03:17:51.562510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.562680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.562703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.562841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.563005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.563031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.563184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.563346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.563371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.563506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.563677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.563702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 
00:29:56.393 [2024-07-14 03:17:51.563858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.564009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.564034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.564196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.564360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.564384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.564562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.564743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.564767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.564911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.565089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.565117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 
00:29:56.393 [2024-07-14 03:17:51.565293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.565430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.393 [2024-07-14 03:17:51.565454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.393 qpair failed and we were unable to recover it. 00:29:56.393 [2024-07-14 03:17:51.565619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.565803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.565828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.566056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.566227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.566251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.566420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.566567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.566592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 
00:29:56.394 [2024-07-14 03:17:51.566739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.566902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.566928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.567070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.567250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.567275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.567411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.567607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.567632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.567769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.567945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.567970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 
00:29:56.394 [2024-07-14 03:17:51.568126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.568300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.568325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.568486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.568659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.568683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.568853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.569004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.569028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.569207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.569359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.569383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 
00:29:56.394 [2024-07-14 03:17:51.569542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.569680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.569704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.569847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.570000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.570025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.570200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.570339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.570364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.570506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.570684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.570711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 
00:29:56.394 [2024-07-14 03:17:51.570852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.571017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.571042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.571190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.571348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.571372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.571525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.571680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.571705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.571848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.571996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.572021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 
00:29:56.394 [2024-07-14 03:17:51.572169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.572309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.572334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.572474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.572617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.572642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.572784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.572929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.572954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.573111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.573267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.573292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 
00:29:56.394 [2024-07-14 03:17:51.573439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.573589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.573616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.573799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.573942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.573968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.574137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.574274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.574298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.574472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.574644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.574669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 
00:29:56.394 [2024-07-14 03:17:51.574850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.575061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.575098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.575257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.575418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.575452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.575620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.575798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.575824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 00:29:56.394 [2024-07-14 03:17:51.575974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.576242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.394 [2024-07-14 03:17:51.576270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.394 qpair failed and we were unable to recover it. 
00:29:56.394 [2024-07-14 03:17:51.576473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.395 [2024-07-14 03:17:51.576654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.395 [2024-07-14 03:17:51.576685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.395 qpair failed and we were unable to recover it.
00:29:56.395 [2024-07-14 03:17:51.576855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.395 [2024-07-14 03:17:51.577057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.395 [2024-07-14 03:17:51.577085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.395 qpair failed and we were unable to recover it.
00:29:56.395 [2024-07-14 03:17:51.577261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.395 [2024-07-14 03:17:51.577416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.395 [2024-07-14 03:17:51.577443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.395 qpair failed and we were unable to recover it.
00:29:56.395 [2024-07-14 03:17:51.577649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.395 [2024-07-14 03:17:51.577818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.395 [2024-07-14 03:17:51.577844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.395 qpair failed and we were unable to recover it.
00:29:56.395 [2024-07-14 03:17:51.578014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.395 [2024-07-14 03:17:51.578267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.395 [2024-07-14 03:17:51.578300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.395 qpair failed and we were unable to recover it.
00:29:56.395 [2024-07-14 03:17:51.578478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.395 [2024-07-14 03:17:51.578664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.395 [2024-07-14 03:17:51.578691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.395 qpair failed and we were unable to recover it.
00:29:56.671 [2024-07-14 03:17:51.578883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.671 [2024-07-14 03:17:51.579137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.671 [2024-07-14 03:17:51.579162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.671 qpair failed and we were unable to recover it.
00:29:56.671 [2024-07-14 03:17:51.579310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.671 [2024-07-14 03:17:51.579491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.671 [2024-07-14 03:17:51.579515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.671 qpair failed and we were unable to recover it.
00:29:56.671 [2024-07-14 03:17:51.579659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.579827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.579852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.580073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.580240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.580266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.580431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.580633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.580658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.580829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.580978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.581003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.581169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.581309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.581334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.581507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.581650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.581675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.581833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.581989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.582026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.582199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.582361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.582397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.582573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.582757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.582790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.582952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.583133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.583168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.583347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.583512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.583550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.583709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.583879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.583915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.584073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.584258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.584284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.584425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.584593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.584617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.584792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.584988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.585015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.585186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.585330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.585356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.585514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.585689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.585715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.585971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.586227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.586253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.586428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.586587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.586612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.586776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.586954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.586980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.587138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.587318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.587343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.587497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.587667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.587694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.587845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.588033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.588060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.588210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.588382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.588408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.588556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.588711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.588736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.588913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.589088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.589115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.589262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.589411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.589437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.589584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.589761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.589787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.589940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.590138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.590164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.590306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.590454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.672 [2024-07-14 03:17:51.590479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.672 qpair failed and we were unable to recover it.
00:29:56.672 [2024-07-14 03:17:51.590657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.590820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.590855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.591039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.591184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.591211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.591389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.591558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.591585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.591744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.592003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.592034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.592191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.592337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.592363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.592503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.592650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.592676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.592935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.593085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.593110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.593261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.593397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.593422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.593687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.593834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.593860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.594034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.594182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.594208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.594369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.594522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.594549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.594732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.594882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.594908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.595179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.595349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.595375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.595537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.595692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.595717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.595854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.596060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.596085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.596259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.596423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.596450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.596629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.596788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.596815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.596975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.597159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.597185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.597344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.597492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.597516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.597665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.597839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.597877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.598033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.598212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.598237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.598412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.598628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.598665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.598811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.598955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.598981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.599137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.599284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.599309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.599448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.599632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.599657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.599833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.600036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.600061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.600221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.600397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.600422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.600562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.600751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.600777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.600948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.601101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.601127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.601276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.601427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.601453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.673 qpair failed and we were unable to recover it.
00:29:56.673 [2024-07-14 03:17:51.601604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.673 [2024-07-14 03:17:51.601786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.674 [2024-07-14 03:17:51.601812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.674 qpair failed and we were unable to recover it.
00:29:56.674 [2024-07-14 03:17:51.601956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.674 [2024-07-14 03:17:51.602133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.674 [2024-07-14 03:17:51.602173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.674 qpair failed and we were unable to recover it.
00:29:56.674 [2024-07-14 03:17:51.602320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.674 [2024-07-14 03:17:51.602464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.674 [2024-07-14 03:17:51.602489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.674 qpair failed and we were unable to recover it.
00:29:56.674 [2024-07-14 03:17:51.602670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.674 [2024-07-14 03:17:51.602824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.674 [2024-07-14 03:17:51.602849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.674 qpair failed and we were unable to recover it.
00:29:56.674 [2024-07-14 03:17:51.603016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.603165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.603189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.603379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.603528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.603553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.603738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.603882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.603908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.604055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.604222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.604247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 
00:29:56.674 [2024-07-14 03:17:51.604421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.604598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.604624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.604809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.604961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.604987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.605156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.605339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.605365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.605510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.605645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.605674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 
00:29:56.674 [2024-07-14 03:17:51.605841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.605995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.606021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.606167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.606422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.606447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.606599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.606751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.606775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.606969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.607123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.607150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 
00:29:56.674 [2024-07-14 03:17:51.607298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.607446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.607470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.607642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.607782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.607806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.607961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.608104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.608131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.608274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.608424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.608448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 
00:29:56.674 [2024-07-14 03:17:51.608600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.608797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.608821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.608965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.609167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.609192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.609359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.609549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.609574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.609736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.609916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.609942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 
00:29:56.674 [2024-07-14 03:17:51.610142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.610312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.610337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.610508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.610664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.610689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.610837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.610990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.611022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.611188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.611359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.611383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 
00:29:56.674 [2024-07-14 03:17:51.611538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.611705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.611729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.611906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.612052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.612077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.612253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.612395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.612420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.674 qpair failed and we were unable to recover it. 00:29:56.674 [2024-07-14 03:17:51.612568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.674 [2024-07-14 03:17:51.612714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.612740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 
00:29:56.675 [2024-07-14 03:17:51.612888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.613050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.613075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.613214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.613363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.613387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.613556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.613726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.613750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.613939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.614086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.614112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 
00:29:56.675 [2024-07-14 03:17:51.614283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.614443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.614468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.614635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.614786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.614811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.614982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.615153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.615178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.615322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.615467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.615492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 
00:29:56.675 [2024-07-14 03:17:51.615684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.615853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.615883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.616051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.616216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.616240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.616384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.616561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.616586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.616728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.616901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.616926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 
00:29:56.675 [2024-07-14 03:17:51.617115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.617254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.617278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.617419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.617568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.617592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.617740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.617912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.617937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.618106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.618255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.618280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 
00:29:56.675 [2024-07-14 03:17:51.618428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.618572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.618596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.618771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.618949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.618976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.619166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.619344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.619368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.619544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.619685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.619710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 
00:29:56.675 [2024-07-14 03:17:51.619890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.620065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.620093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.620284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.620443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.620467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.620615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.620804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.620829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.620977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.621153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.621177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 
00:29:56.675 [2024-07-14 03:17:51.621328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.621473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.621496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.621640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.621811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.621836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.622006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.622167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.622192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.622393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.622540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.622565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 
00:29:56.675 [2024-07-14 03:17:51.622735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.622912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.622937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.623117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.623275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.623299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.675 qpair failed and we were unable to recover it. 00:29:56.675 [2024-07-14 03:17:51.623463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.675 [2024-07-14 03:17:51.623603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.623631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-07-14 03:17:51.623793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.623948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.623974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 
00:29:56.676 [2024-07-14 03:17:51.624158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.624308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.624332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-07-14 03:17:51.624522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.624661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.624685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-07-14 03:17:51.624849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.625044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.625069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-07-14 03:17:51.625268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.625434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.625459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 
00:29:56.676 [2024-07-14 03:17:51.625613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.625751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.625776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-07-14 03:17:51.625936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.626078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.626103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-07-14 03:17:51.626284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.626426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.626450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-07-14 03:17:51.626598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.626769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.626793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 
00:29:56.676 [2024-07-14 03:17:51.626986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.627158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.627183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-07-14 03:17:51.627328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.627471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.627495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-07-14 03:17:51.627658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.627805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.627830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 00:29:56.676 [2024-07-14 03:17:51.627975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.628149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.676 [2024-07-14 03:17:51.628174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.676 qpair failed and we were unable to recover it. 
00:29:56.676 [2024-07-14 03:17:51.628366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.628511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.628536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.676 qpair failed and we were unable to recover it.
00:29:56.676 [2024-07-14 03:17:51.628678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.628842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.628870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.676 qpair failed and we were unable to recover it.
00:29:56.676 [2024-07-14 03:17:51.629017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.629158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.629183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.676 qpair failed and we were unable to recover it.
00:29:56.676 [2024-07-14 03:17:51.629323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.629461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.629486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.676 qpair failed and we were unable to recover it.
00:29:56.676 [2024-07-14 03:17:51.629641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.629842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.629877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.676 qpair failed and we were unable to recover it.
00:29:56.676 [2024-07-14 03:17:51.630027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.630171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.630195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.676 qpair failed and we were unable to recover it.
00:29:56.676 [2024-07-14 03:17:51.630337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.630481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.630505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.676 qpair failed and we were unable to recover it.
00:29:56.676 [2024-07-14 03:17:51.630676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.630850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.630881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.676 qpair failed and we were unable to recover it.
00:29:56.676 [2024-07-14 03:17:51.631060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.631229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.631254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.676 qpair failed and we were unable to recover it.
00:29:56.676 [2024-07-14 03:17:51.631399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.631599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.631624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.676 qpair failed and we were unable to recover it.
00:29:56.676 [2024-07-14 03:17:51.631770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.631943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.631968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.676 qpair failed and we were unable to recover it.
00:29:56.676 [2024-07-14 03:17:51.632111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.676 [2024-07-14 03:17:51.632251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.632276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.632434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.632610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.632635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.632788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.632956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.632982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.633142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.633281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.633306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.633445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.633594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.633618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.633778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.633946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.633971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.634118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.634280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.634305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.634443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.634580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.634605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.634782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.634939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.634964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.635141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.635280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.635305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.635462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.635630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.635655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.635838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.636014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.636040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.636211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.636379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.636404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.636580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.636730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.636754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.636923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.637101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.637126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.637263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.637399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.637424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.637596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.637778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.637803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.637953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.638119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.638143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.638309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.638467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.638492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.638669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.638808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.638833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.638997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.639165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.639190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.639332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.639530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.639554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.639722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.639888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.639914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.640056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.640200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.640225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.640394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.640596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.640620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.640770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.640931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.640956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.641122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.641295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.641324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.641478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.641650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.641674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.641815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.641957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.641982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.642132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.642276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.642300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.642437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.642580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.642605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.677 qpair failed and we were unable to recover it.
00:29:56.677 [2024-07-14 03:17:51.642792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.677 [2024-07-14 03:17:51.642938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.642963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.643115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.643283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.643308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.643468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.643645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.643669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.643829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.643979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.644013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.644153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.644292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.644317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.644464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.644606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.644630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.644806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.644974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.645000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.645151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.645347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.645371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.645536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.645697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.645722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.645915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.646090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.646115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.646305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.646448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.646473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.646620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.646793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.646818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.646973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.647116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.647142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.647315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.647473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.647498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.647668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.647830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.647855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.648040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.648211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.648236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.648393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.648542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.648568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.648706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.648852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.648886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.649070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.649215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.649240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.649412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.649594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.649621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.649780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.649963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.649989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.650139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.650281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.650308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.650473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.650668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.650694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.650839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.650992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.651017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.651175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.651366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.651394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.651553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.651721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.651746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.651892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.652039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.652065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.652215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.652374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.652399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.652578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.652750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.652774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.652950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.653107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.678 [2024-07-14 03:17:51.653132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.678 qpair failed and we were unable to recover it.
00:29:56.678 [2024-07-14 03:17:51.653297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-07-14 03:17:51.653498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-07-14 03:17:51.653523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.678 qpair failed and we were unable to recover it. 00:29:56.678 [2024-07-14 03:17:51.653666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.678 [2024-07-14 03:17:51.653814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.653838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.654000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.654167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.654192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.654334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.654497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.654522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 
00:29:56.679 [2024-07-14 03:17:51.654662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.654804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.654829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.654988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.655134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.655159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.655360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.655503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.655528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.655677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.655829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.655853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 
00:29:56.679 [2024-07-14 03:17:51.656010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.656171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.656196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.656386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.656529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.656553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.656690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.656853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.656888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.657032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.657185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.657210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 
00:29:56.679 [2024-07-14 03:17:51.657376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.657570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.657595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.657733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.657880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.657915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.658090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.658237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.658262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.658435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.658579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.658604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 
00:29:56.679 [2024-07-14 03:17:51.658781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.658924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.658954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.659102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.659238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.659263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.659404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.659541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.659566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.659710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.659852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.659882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 
00:29:56.679 [2024-07-14 03:17:51.660023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.660205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.660230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.660374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.660515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.660540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.660703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.660878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.660904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.661058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.661215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.661239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 
00:29:56.679 [2024-07-14 03:17:51.661407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.661596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.661620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.661767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.661950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.661975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.662139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.662282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.662307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.662484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.662629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.662653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 
00:29:56.679 [2024-07-14 03:17:51.662793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.662957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.662982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.663130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.663303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.663327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.663469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.663632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.663656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 00:29:56.679 [2024-07-14 03:17:51.663828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.664002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.664027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.679 qpair failed and we were unable to recover it. 
00:29:56.679 [2024-07-14 03:17:51.664186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.679 [2024-07-14 03:17:51.664386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.664411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.664592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.664743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.664767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.664955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.665113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.665138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.665286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.665461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.665485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 
00:29:56.680 [2024-07-14 03:17:51.665632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.665817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.665842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.665999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.666151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.666177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.666322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.666463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.666487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.666628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.666774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.666798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 
00:29:56.680 [2024-07-14 03:17:51.666944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.667085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.667110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.667252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.667393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.667418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.667562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.667762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.667787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.667930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.668074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.668099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 
00:29:56.680 [2024-07-14 03:17:51.668272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.668452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.668479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.668654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.668788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.668813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.668970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.669110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.669135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.669317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.669457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.669482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 
00:29:56.680 [2024-07-14 03:17:51.669645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.669844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.669885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.670065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.670210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.670235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.670409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.670580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.670605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.670757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.670927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.670953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 
00:29:56.680 [2024-07-14 03:17:51.671104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.671258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.671282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.671431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.671587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.671612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.671785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.671932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.671958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.672108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.672278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.672304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 
00:29:56.680 [2024-07-14 03:17:51.672453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.672593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.672618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.672809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.672962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.672988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.673140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.673310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.673335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.680 qpair failed and we were unable to recover it. 00:29:56.680 [2024-07-14 03:17:51.673474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.680 [2024-07-14 03:17:51.673631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.681 [2024-07-14 03:17:51.673656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.681 qpair failed and we were unable to recover it. 
00:29:56.681 [2024-07-14 03:17:51.673802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.681 [2024-07-14 03:17:51.673960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.681 [2024-07-14 03:17:51.673985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.681 qpair failed and we were unable to recover it. 00:29:56.681 [2024-07-14 03:17:51.674139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.681 [2024-07-14 03:17:51.674284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.681 [2024-07-14 03:17:51.674309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.681 qpair failed and we were unable to recover it. 00:29:56.681 [2024-07-14 03:17:51.674484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.681 [2024-07-14 03:17:51.674643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.681 [2024-07-14 03:17:51.674667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.681 qpair failed and we were unable to recover it. 00:29:56.681 [2024-07-14 03:17:51.674808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.681 [2024-07-14 03:17:51.674959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.681 [2024-07-14 03:17:51.674985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.681 qpair failed and we were unable to recover it. 
00:29:56.681 [2024-07-14 03:17:51.675140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.675317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.675344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.675517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.675661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.675686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.675879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.676056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.676081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.676261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.676404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.676433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.676585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.676737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.676762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.676936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.677084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.677109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.677250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.677400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.677427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.677591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.677766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.677791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.677984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.678126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.678151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.678299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.678471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.678496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.678668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.678837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.678861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.679008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.679168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.679193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.679339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.679510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.679535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.679678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.679846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.679882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.680031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.680199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.680223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.680393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.680537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.680561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.680717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.680862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.680898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.681099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.681244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.681270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.681410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.681580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.681604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.681778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.681925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.681951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.682096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.682253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.682277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.682451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.682589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.682614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.682762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.682915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.682941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.683093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.683253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.683277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.683433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.683577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.683602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.683751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.683895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.681 [2024-07-14 03:17:51.683921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.681 qpair failed and we were unable to recover it.
00:29:56.681 [2024-07-14 03:17:51.684099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.684262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.684287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.684431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.684590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.684614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.684799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.684943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.684969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.685140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.685307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.685332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.685474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.685614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.685638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.685810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.685989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.686015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.686162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.686310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.686336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.686495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.686637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.686661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.686817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.686992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.687018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.687207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.687355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.687380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.687549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.687696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.687722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.687882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.688084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.688109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.688288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.688435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.688460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.688604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.688795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.688820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.688969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.689108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.689133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.689276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.689422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.689446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.689615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.689773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.689797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.689972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.690115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.690140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.690286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.690460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.690485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.690634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.690800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.690824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.690981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.691141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.691166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.691329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.691516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.691540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.691685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.691861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.691892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.692049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.692222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.692246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.692397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.692601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.692625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.692802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.692973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.692998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.693180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.693318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.693342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.693486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.693653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.693678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.693851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.694021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.694047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.694226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.694398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.694423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.682 [2024-07-14 03:17:51.694594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.694743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.682 [2024-07-14 03:17:51.694767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.682 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.694914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.695066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.695091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.695239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.695399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.695423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.695565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.695705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.695730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.695873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.696042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.696067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.696229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.696402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.696426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.696588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.696760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.696785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.696928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.697105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.697129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.697279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.697428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.697456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.697622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.697795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.697819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.697993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.698166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.698191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.698332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.698537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.698561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.698707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.698892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.698918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.699067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.699205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.699230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.699402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.699590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.699614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.699787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.699961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.699986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.700133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.700304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.700328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.700500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.700637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.700661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.700805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.700975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.701000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.701157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.701327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.701351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.701489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.701631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.701656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.701804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.701980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.702006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.702159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.702306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.702331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.702469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.702659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.702683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.702825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.702966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.702991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.703186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.703327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.703351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.703522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.703673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.703700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.703878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.704029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.704054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.704208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.704387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.704411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.704558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.704745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.704770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.704916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.705096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.705121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.683 [2024-07-14 03:17:51.705290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.705434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.683 [2024-07-14 03:17:51.705459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.683 qpair failed and we were unable to recover it.
00:29:56.684 [2024-07-14 03:17:51.705665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.705821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.705846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.706037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.706178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.706202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.706385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.706521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.706546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.706710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.706851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.706884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 
00:29:56.684 [2024-07-14 03:17:51.707048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.707183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.707207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.707352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.707501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.707526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.707694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.707861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.707898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.708066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.708233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.708258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 
00:29:56.684 [2024-07-14 03:17:51.708402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.708569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.708594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.708768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.708948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.708974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.709146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.709288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.709312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.709513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.709669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.709694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 
00:29:56.684 [2024-07-14 03:17:51.709835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.710013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.710038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.710217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.710361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.710385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.710529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.710671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.710695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.710841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.711047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.711073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 
00:29:56.684 [2024-07-14 03:17:51.711222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.711414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.711439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.711601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.711742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.711767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.711938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.712103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.712128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.712272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.712474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.712499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 
00:29:56.684 [2024-07-14 03:17:51.712670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.712829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.712854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.713032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.713201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.713226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.713400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.713577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.713601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.713768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.713916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.713942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 
00:29:56.684 [2024-07-14 03:17:51.714112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.714261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.714286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.714447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.714600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.714624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.684 qpair failed and we were unable to recover it. 00:29:56.684 [2024-07-14 03:17:51.714798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.684 [2024-07-14 03:17:51.714978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.715005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.715147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.715317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.715348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 
00:29:56.685 [2024-07-14 03:17:51.715520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.715690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.715715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.715885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.716025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.716050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.716207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.716351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.716375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.716566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.716714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.716739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 
00:29:56.685 [2024-07-14 03:17:51.716879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.717026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.717051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.717224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.717392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.717416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.717562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.717761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.717785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.717929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.718091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.718116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 
00:29:56.685 [2024-07-14 03:17:51.718295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.718457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.718481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.718652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.718798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.718823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.718976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.719149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.719174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.719345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.719485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.719509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 
00:29:56.685 [2024-07-14 03:17:51.719668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.719810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.719834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.720016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.720182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.720207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.720395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.720555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.720579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.720748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.720886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.720911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 
00:29:56.685 [2024-07-14 03:17:51.721062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.721209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.721233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.721381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.721527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.721552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.721750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.721903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.721928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.722068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.722239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.722264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 
00:29:56.685 [2024-07-14 03:17:51.722410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.722588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.722614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.722788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.722935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.722960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.723112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.723287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.723311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.723452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.723622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.723647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 
00:29:56.685 [2024-07-14 03:17:51.723818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.723969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.723994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.724136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.724319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.724344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.724546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.724684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.724709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.724855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.724999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.725024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 
00:29:56.685 [2024-07-14 03:17:51.725161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.725304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.725329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.685 qpair failed and we were unable to recover it. 00:29:56.685 [2024-07-14 03:17:51.725492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.685 [2024-07-14 03:17:51.725639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.686 [2024-07-14 03:17:51.725664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.686 qpair failed and we were unable to recover it. 00:29:56.686 [2024-07-14 03:17:51.725863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.686 [2024-07-14 03:17:51.726028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.686 [2024-07-14 03:17:51.726053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.686 qpair failed and we were unable to recover it. 00:29:56.686 [2024-07-14 03:17:51.726204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.686 [2024-07-14 03:17:51.726342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.686 [2024-07-14 03:17:51.726367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.686 qpair failed and we were unable to recover it. 
00:29:56.686 [2024-07-14 03:17:51.726544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.686 [2024-07-14 03:17:51.726690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.686 [2024-07-14 03:17:51.726715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.686 qpair failed and we were unable to recover it. 00:29:56.686 [2024-07-14 03:17:51.726916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.686 [2024-07-14 03:17:51.727066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.686 [2024-07-14 03:17:51.727091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.686 qpair failed and we were unable to recover it. 00:29:56.686 [2024-07-14 03:17:51.727238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.686 [2024-07-14 03:17:51.727426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.686 [2024-07-14 03:17:51.727451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.686 qpair failed and we were unable to recover it. 00:29:56.686 [2024-07-14 03:17:51.727591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.686 [2024-07-14 03:17:51.727753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.686 [2024-07-14 03:17:51.727778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.686 qpair failed and we were unable to recover it. 
00:29:56.686 [2024-07-14 03:17:51.727945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.728125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.728151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.728315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.728460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.728485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.728637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.728795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.728821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.728962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.729102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.729127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.729307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.729487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.729513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.729677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.729813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.729838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.729988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.730160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.730185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.730329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.730492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.730517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.730668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.730809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.730834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.730984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.731121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.731146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.731327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.731469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.731494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.731645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.731825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.731849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.732051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.732197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.732222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.732396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.732561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.732586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.732756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.732901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.732932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.733114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.733319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.733344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.733487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.733625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.733650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.733820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.733997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.734022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.734196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.734364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.734389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.734541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.734740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.734764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.734905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.735054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.735079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.735227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.735393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.735417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.735558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.735714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.735739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.735906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.736064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.736089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.686 qpair failed and we were unable to recover it.
00:29:56.686 [2024-07-14 03:17:51.736233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.686 [2024-07-14 03:17:51.736393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.736423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.736581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.736754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.736778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.736937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.737109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.737133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.737283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.737433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.737458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.737598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.737756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.737781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.737923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.738095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.738120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.738273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.738448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.738473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.738625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.738784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.738808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.738982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.739130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.739155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.739322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.739471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.739495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.739667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.739810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.739835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.739994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.740138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.740162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.740319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.740478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.740502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.740648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.740788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.740813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.740977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.741180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.741204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.741351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.741524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.741549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.741695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.741862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.741900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.742074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.742219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.742244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.742412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.742560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.742585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.742775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.742946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.742971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.743121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.743266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.743292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.743476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.743617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.743641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.743844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.743993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.744018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.744195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.744370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.744395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.744564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.744743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.744768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.744945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.745118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.745143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.745284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.745423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.745448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.745617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.745819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.745844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.746023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.746160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.746185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.746339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.746479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.746504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.746677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.746827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.746852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.687 qpair failed and we were unable to recover it.
00:29:56.687 [2024-07-14 03:17:51.747034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.687 [2024-07-14 03:17:51.747187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.747212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.688 qpair failed and we were unable to recover it.
00:29:56.688 [2024-07-14 03:17:51.747354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.747527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.747552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.688 qpair failed and we were unable to recover it.
00:29:56.688 [2024-07-14 03:17:51.747705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.747881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.747907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.688 qpair failed and we were unable to recover it.
00:29:56.688 [2024-07-14 03:17:51.748089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.748264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.748288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.688 qpair failed and we were unable to recover it.
00:29:56.688 [2024-07-14 03:17:51.748476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.748650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.748675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.688 qpair failed and we were unable to recover it.
00:29:56.688 [2024-07-14 03:17:51.748841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.748999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.749025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.688 qpair failed and we were unable to recover it.
00:29:56.688 [2024-07-14 03:17:51.749167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.749339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.749364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.688 qpair failed and we were unable to recover it.
00:29:56.688 [2024-07-14 03:17:51.749501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.749640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.749665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.688 qpair failed and we were unable to recover it.
00:29:56.688 [2024-07-14 03:17:51.749836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.750027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.750053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.688 qpair failed and we were unable to recover it.
00:29:56.688 [2024-07-14 03:17:51.750193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.750331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.750356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.688 qpair failed and we were unable to recover it.
00:29:56.688 [2024-07-14 03:17:51.750532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.750677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.750701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.688 qpair failed and we were unable to recover it.
00:29:56.688 [2024-07-14 03:17:51.750879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.751026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.751051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.688 qpair failed and we were unable to recover it.
00:29:56.688 [2024-07-14 03:17:51.751222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.751380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.751405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.688 qpair failed and we were unable to recover it.
00:29:56.688 [2024-07-14 03:17:51.751573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.751714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.751739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.688 qpair failed and we were unable to recover it.
00:29:56.688 [2024-07-14 03:17:51.751893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.752061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.752086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.688 qpair failed and we were unable to recover it.
00:29:56.688 [2024-07-14 03:17:51.752256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.752417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.752442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.688 qpair failed and we were unable to recover it.
00:29:56.688 [2024-07-14 03:17:51.752604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.752743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.688 [2024-07-14 03:17:51.752768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.688 qpair failed and we were unable to recover it.
00:29:56.688 [2024-07-14 03:17:51.752938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.753117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.753142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.688 qpair failed and we were unable to recover it. 00:29:56.688 [2024-07-14 03:17:51.753279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.753472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.753497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.688 qpair failed and we were unable to recover it. 00:29:56.688 [2024-07-14 03:17:51.753667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.753808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.753833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.688 qpair failed and we were unable to recover it. 00:29:56.688 [2024-07-14 03:17:51.754013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.754161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.754190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.688 qpair failed and we were unable to recover it. 
00:29:56.688 [2024-07-14 03:17:51.754364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.754503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.754528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.688 qpair failed and we were unable to recover it. 00:29:56.688 [2024-07-14 03:17:51.754728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.754876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.754903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.688 qpair failed and we were unable to recover it. 00:29:56.688 [2024-07-14 03:17:51.755043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.755211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.755235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.688 qpair failed and we were unable to recover it. 00:29:56.688 [2024-07-14 03:17:51.755405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.755582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.755607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.688 qpair failed and we were unable to recover it. 
00:29:56.688 [2024-07-14 03:17:51.755766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.755911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.755936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.688 qpair failed and we were unable to recover it. 00:29:56.688 [2024-07-14 03:17:51.756104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.756262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.756286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.688 qpair failed and we were unable to recover it. 00:29:56.688 [2024-07-14 03:17:51.756427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.756592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.756616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.688 qpair failed and we were unable to recover it. 00:29:56.688 [2024-07-14 03:17:51.756759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.756925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.756951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.688 qpair failed and we were unable to recover it. 
00:29:56.688 [2024-07-14 03:17:51.757123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.757264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.757289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.688 qpair failed and we were unable to recover it. 00:29:56.688 [2024-07-14 03:17:51.757432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.757599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.757624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.688 qpair failed and we were unable to recover it. 00:29:56.688 [2024-07-14 03:17:51.757768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.688 [2024-07-14 03:17:51.757941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.757967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.758169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.758304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.758328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 
00:29:56.689 [2024-07-14 03:17:51.758485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.758642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.758667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.758814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.759005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.759031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.759179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.759349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.759374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.759553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.759700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.759725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 
00:29:56.689 [2024-07-14 03:17:51.759892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.760093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.760119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.760261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.760405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.760429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.760576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.760748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.760774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.760950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.761086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.761111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 
00:29:56.689 [2024-07-14 03:17:51.761287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.761479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.761503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.761673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.761846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.761876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.762020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.762180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.762205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.762378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.762551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.762576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 
00:29:56.689 [2024-07-14 03:17:51.762724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.762895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.762920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.763082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.763230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.763254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.763428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.763598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.763623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.763761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.763902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.763928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 
00:29:56.689 [2024-07-14 03:17:51.764098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.764261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.764286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.764465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.764652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.764677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.764821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.764986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.765012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.765158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.765324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.765349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 
00:29:56.689 [2024-07-14 03:17:51.765513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.765647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.765672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.765848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.766046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.766072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.766229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.766373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.766398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.766578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.766749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.766774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 
00:29:56.689 [2024-07-14 03:17:51.766972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.767109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.767134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.689 qpair failed and we were unable to recover it. 00:29:56.689 [2024-07-14 03:17:51.767276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.689 [2024-07-14 03:17:51.767414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.767439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 00:29:56.690 [2024-07-14 03:17:51.767599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.767743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.767767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 00:29:56.690 [2024-07-14 03:17:51.767944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.768120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.768145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 
00:29:56.690 [2024-07-14 03:17:51.768287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.768450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.768475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 00:29:56.690 [2024-07-14 03:17:51.768653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.768813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.768838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 00:29:56.690 [2024-07-14 03:17:51.769019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.769169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.769193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 00:29:56.690 [2024-07-14 03:17:51.769341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.769512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.769537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 
00:29:56.690 [2024-07-14 03:17:51.769707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.769878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.769904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 00:29:56.690 [2024-07-14 03:17:51.770047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.770209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.770234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 00:29:56.690 [2024-07-14 03:17:51.770371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.770522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.770547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 00:29:56.690 [2024-07-14 03:17:51.770698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.770850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.770900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 
00:29:56.690 [2024-07-14 03:17:51.771051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.771219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.771243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 00:29:56.690 [2024-07-14 03:17:51.771392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.771535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.771560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 00:29:56.690 [2024-07-14 03:17:51.771730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.771899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.771929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 00:29:56.690 [2024-07-14 03:17:51.772134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.772279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.772304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 
00:29:56.690 [2024-07-14 03:17:51.772452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.772590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.772615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 00:29:56.690 [2024-07-14 03:17:51.772764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.772972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.772997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 00:29:56.690 [2024-07-14 03:17:51.773176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.773321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.773345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 00:29:56.690 [2024-07-14 03:17:51.773505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.773677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.773702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 
00:29:56.690 [2024-07-14 03:17:51.773905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.774073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.774098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 00:29:56.690 [2024-07-14 03:17:51.774243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.774393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.774417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 00:29:56.690 [2024-07-14 03:17:51.774554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.774696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.774720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 00:29:56.690 [2024-07-14 03:17:51.774860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.775007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.690 [2024-07-14 03:17:51.775032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.690 qpair failed and we were unable to recover it. 
00:29:56.690 [2024-07-14 03:17:51.775181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.690 [2024-07-14 03:17:51.775378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.690 [2024-07-14 03:17:51.775403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.690 qpair failed and we were unable to recover it.
[identical posix_sock_create / nvme_tcp_qpair_connect_sock failure entries (errno = 111, tqpair=0x161a350, addr=10.0.0.2, port=4420) repeat continuously from 03:17:51.775 through 03:17:51.805 and are omitted here]
00:29:56.693 [2024-07-14 03:17:51.805463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.693 [2024-07-14 03:17:51.805633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.693 [2024-07-14 03:17:51.805657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.693 qpair failed and we were unable to recover it. 00:29:56.693 [2024-07-14 03:17:51.805803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.693 [2024-07-14 03:17:51.805971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.693 [2024-07-14 03:17:51.805996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.693 qpair failed and we were unable to recover it. 00:29:56.693 [2024-07-14 03:17:51.806197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.693 [2024-07-14 03:17:51.806346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.693 [2024-07-14 03:17:51.806371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.693 qpair failed and we were unable to recover it. 00:29:56.693 [2024-07-14 03:17:51.806545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.693 [2024-07-14 03:17:51.806736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.693 [2024-07-14 03:17:51.806762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.693 qpair failed and we were unable to recover it. 
00:29:56.693 [2024-07-14 03:17:51.806942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.693 [2024-07-14 03:17:51.807095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.693 [2024-07-14 03:17:51.807124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.693 qpair failed and we were unable to recover it. 00:29:56.693 [2024-07-14 03:17:51.807299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.693 [2024-07-14 03:17:51.807463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.693 [2024-07-14 03:17:51.807488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.693 qpair failed and we were unable to recover it. 00:29:56.693 [2024-07-14 03:17:51.807661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.693 [2024-07-14 03:17:51.807807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.693 [2024-07-14 03:17:51.807832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.693 qpair failed and we were unable to recover it. 00:29:56.693 [2024-07-14 03:17:51.807978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.693 [2024-07-14 03:17:51.808148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.693 [2024-07-14 03:17:51.808173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.693 qpair failed and we were unable to recover it. 
00:29:56.693 [2024-07-14 03:17:51.808315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.693 [2024-07-14 03:17:51.808471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.693 [2024-07-14 03:17:51.808496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.693 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.808643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.808812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.808837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.809017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.809170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.809197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.809373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.809516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.809541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 
00:29:56.694 [2024-07-14 03:17:51.809697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.809834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.809859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.810039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.810184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.810209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.810348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.810538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.810567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.810745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.810937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.810963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 
00:29:56.694 [2024-07-14 03:17:51.811111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.811273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.811298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.811441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.811620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.811645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.811791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.811932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.811958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.812127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.812265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.812289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 
00:29:56.694 [2024-07-14 03:17:51.812447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.812587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.812612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.812774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.812936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.812962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.813141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.813342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.813367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.813512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.813672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.813697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 
00:29:56.694 [2024-07-14 03:17:51.813876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.814027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.814052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.814198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.814342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.814366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.814507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.814652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.814679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.814817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.814985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.815011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 
00:29:56.694 [2024-07-14 03:17:51.815181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.815330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.815355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.815504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.815668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.815693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.815871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.816061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.816086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.816228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.816428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.816453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 
00:29:56.694 [2024-07-14 03:17:51.816599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.816777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.816804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.816977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.817129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.817155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.817309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.817452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.817477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.817678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.817847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.817883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 
00:29:56.694 [2024-07-14 03:17:51.818062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.818203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.818228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.818391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.818596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.818621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.818785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.818931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.818957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.694 qpair failed and we were unable to recover it. 00:29:56.694 [2024-07-14 03:17:51.819105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.694 [2024-07-14 03:17:51.819262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.819288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 
00:29:56.695 [2024-07-14 03:17:51.819430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.819596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.819621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 00:29:56.695 [2024-07-14 03:17:51.819796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.819980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.820005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 00:29:56.695 [2024-07-14 03:17:51.820182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.820326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.820352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 00:29:56.695 [2024-07-14 03:17:51.820503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.820642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.820667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 
00:29:56.695 [2024-07-14 03:17:51.820847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.821001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.821027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 00:29:56.695 [2024-07-14 03:17:51.821201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.821349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.821374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 00:29:56.695 [2024-07-14 03:17:51.821545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.821705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.821730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 00:29:56.695 [2024-07-14 03:17:51.821918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.822079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.822104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 
00:29:56.695 [2024-07-14 03:17:51.822276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.822455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.822479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 00:29:56.695 [2024-07-14 03:17:51.822625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.822803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.822828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 00:29:56.695 [2024-07-14 03:17:51.822980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.823151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.823176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 00:29:56.695 [2024-07-14 03:17:51.823325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.823476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.823501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 
00:29:56.695 [2024-07-14 03:17:51.823674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.823842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.823873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 00:29:56.695 [2024-07-14 03:17:51.824018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.824164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.824189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 00:29:56.695 [2024-07-14 03:17:51.824339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.824486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.824510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 00:29:56.695 [2024-07-14 03:17:51.824691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.824836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.824861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 
00:29:56.695 [2024-07-14 03:17:51.825061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.825205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.825230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 00:29:56.695 [2024-07-14 03:17:51.825411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.825590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.825615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 00:29:56.695 [2024-07-14 03:17:51.825762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.825903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.825930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 00:29:56.695 [2024-07-14 03:17:51.826086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.826243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.695 [2024-07-14 03:17:51.826268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.695 qpair failed and we were unable to recover it. 
00:29:56.698 [2024-07-14 03:17:51.855698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.855858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.855888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.698 qpair failed and we were unable to recover it. 00:29:56.698 [2024-07-14 03:17:51.856057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.856210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.856235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.698 qpair failed and we were unable to recover it. 00:29:56.698 [2024-07-14 03:17:51.856405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.856574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.856598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.698 qpair failed and we were unable to recover it. 00:29:56.698 [2024-07-14 03:17:51.856777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.856934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.856960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.698 qpair failed and we were unable to recover it. 
00:29:56.698 [2024-07-14 03:17:51.857119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.857289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.857314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.698 qpair failed and we were unable to recover it. 00:29:56.698 [2024-07-14 03:17:51.857501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.857639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.857664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.698 qpair failed and we were unable to recover it. 00:29:56.698 [2024-07-14 03:17:51.857838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.858016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.858041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.698 qpair failed and we were unable to recover it. 00:29:56.698 [2024-07-14 03:17:51.858199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.858355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.858380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.698 qpair failed and we were unable to recover it. 
00:29:56.698 [2024-07-14 03:17:51.858546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.858692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.858717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.698 qpair failed and we were unable to recover it. 00:29:56.698 [2024-07-14 03:17:51.858891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.859063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.859088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.698 qpair failed and we were unable to recover it. 00:29:56.698 [2024-07-14 03:17:51.859261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.859434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.859459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.698 qpair failed and we were unable to recover it. 00:29:56.698 [2024-07-14 03:17:51.859617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.859766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.859791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.698 qpair failed and we were unable to recover it. 
00:29:56.698 [2024-07-14 03:17:51.859935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.860080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.860105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.698 qpair failed and we were unable to recover it. 00:29:56.698 [2024-07-14 03:17:51.860263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.860437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.860462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.698 qpair failed and we were unable to recover it. 00:29:56.698 [2024-07-14 03:17:51.860633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.860802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.698 [2024-07-14 03:17:51.860827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.698 qpair failed and we were unable to recover it. 00:29:56.698 [2024-07-14 03:17:51.860981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.861131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.861161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 
00:29:56.699 [2024-07-14 03:17:51.861340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.861478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.861503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.861644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.861820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.861845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.862035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.862195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.862220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.862393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.862534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.862559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 
00:29:56.699 [2024-07-14 03:17:51.862743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.862884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.862909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.863058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.863199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.863224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.863370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.863544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.863570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.863738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.863943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.863973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 
00:29:56.699 [2024-07-14 03:17:51.864115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.864285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.864311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.864506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.864706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.864731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.864904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.865045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.865070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.865258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.865399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.865423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 
00:29:56.699 [2024-07-14 03:17:51.865570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.865705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.865730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.865876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.866027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.866054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.866226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.866364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.866389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.866531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.866699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.866724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 
00:29:56.699 [2024-07-14 03:17:51.866898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.867045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.867070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.867245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.867418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.867447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.867593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.867743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.867769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.867918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.868090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.868115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 
00:29:56.699 [2024-07-14 03:17:51.868262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.868432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.868457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.868599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.868763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.868788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.868938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.869110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.869135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.869285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.869432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.869457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 
00:29:56.699 [2024-07-14 03:17:51.869605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.869746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.869771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.869918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.870067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.870092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.870262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.870431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.870456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.870596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.870735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.870760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 
00:29:56.699 [2024-07-14 03:17:51.870949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.871088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.871113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.871256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.871426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.699 [2024-07-14 03:17:51.871451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.699 qpair failed and we were unable to recover it. 00:29:56.699 [2024-07-14 03:17:51.871595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.871732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.871756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.871914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.872093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.872120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 
00:29:56.700 [2024-07-14 03:17:51.872321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.872495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.872520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.872663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.872802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.872827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.872977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.873120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.873145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.873355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.873501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.873525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 
00:29:56.700 [2024-07-14 03:17:51.873697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.873837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.873862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.874042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.874207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.874233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.874387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.874561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.874586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.874723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.874863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.874894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 
00:29:56.700 [2024-07-14 03:17:51.875062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.875222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.875247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.875416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.875566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.875590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.875784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.875958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.875983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.876184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.876341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.876366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 
00:29:56.700 [2024-07-14 03:17:51.876570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.876709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.876734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.876911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.877051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.877076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.877218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.877387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.877413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.877550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.877692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.877717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 
00:29:56.700 [2024-07-14 03:17:51.877892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.878034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.878060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.878224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.878395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.878421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.878610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.878775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.878800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.878964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.879107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.879132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 
00:29:56.700 [2024-07-14 03:17:51.879293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.879433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.879458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.879630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.879804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.879829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.880011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.880149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.880174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.880315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.880480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.880504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 
00:29:56.700 [2024-07-14 03:17:51.880673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.880835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.880860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.881038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.881203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.881228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.881391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.881565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.881590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.881735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.881899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.881925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 
00:29:56.700 [2024-07-14 03:17:51.882093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.882233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.700 [2024-07-14 03:17:51.882258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.700 qpair failed and we were unable to recover it. 00:29:56.700 [2024-07-14 03:17:51.882414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.882616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.882641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.882787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.882948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.882974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.883163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.883322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.883347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 
00:29:56.701 [2024-07-14 03:17:51.883490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.883659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.883684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.883855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.884001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.884026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.884199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.884342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.884367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.884524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.884722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.884746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 
00:29:56.701 [2024-07-14 03:17:51.884893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.885039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.885067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.885235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.885395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.885418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.885593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.885761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.885786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.885930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.886100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.886125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 
00:29:56.701 [2024-07-14 03:17:51.886268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.886412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.886437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.886599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.886733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.886759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.886934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.887077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.887103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.887276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.887416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.887441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 
00:29:56.701 [2024-07-14 03:17:51.887581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.887721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.887746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.887923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.888068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.888094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.888266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.888433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.888458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.888662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.888810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.888835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 
00:29:56.701 [2024-07-14 03:17:51.888985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.889132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.889157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.889334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.889513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.889538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.889682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.889854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.889898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.890052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.890191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.890216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 
00:29:56.701 [2024-07-14 03:17:51.890354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.890530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.890555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.890705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.890881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.890908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.891059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.891248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.891274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.891421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.891562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.891588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 
00:29:56.701 [2024-07-14 03:17:51.891760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.891902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.891928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.701 qpair failed and we were unable to recover it. 00:29:56.701 [2024-07-14 03:17:51.892085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.892226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.701 [2024-07-14 03:17:51.892251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.892416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.892564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.892589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.892749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.892900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.892926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 
00:29:56.702 [2024-07-14 03:17:51.893066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.893239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.893263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.893406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.893582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.893608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.893746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.893919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.893945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.894135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.894296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.894321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 
00:29:56.702 [2024-07-14 03:17:51.894472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.894635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.894660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.894801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.894995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.895021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.895175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.895310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.895334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.895479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.895656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.895683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 
00:29:56.702 [2024-07-14 03:17:51.895836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.895983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.896009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.896150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.896327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.896352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.896526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.896676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.896701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.896837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.896995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.897022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 
00:29:56.702 [2024-07-14 03:17:51.897160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.897332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.897357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.897517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.897692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.897717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.897864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.898051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.898076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.898247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.898398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.898425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 
00:29:56.702 [2024-07-14 03:17:51.898599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.898767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.898792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.898940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.899104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.899129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.899307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.899448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.899473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.899627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.899772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.899797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 
00:29:56.702 [2024-07-14 03:17:51.899968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.900142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.900167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.900309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.900507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.900532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.900702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.900859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.900889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 00:29:56.702 [2024-07-14 03:17:51.901050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.901190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.702 [2024-07-14 03:17:51.901215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.702 qpair failed and we were unable to recover it. 
00:29:56.969 [2024-07-14 03:17:51.930277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.969 [2024-07-14 03:17:51.930427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.969 [2024-07-14 03:17:51.930454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.969 qpair failed and we were unable to recover it. 00:29:56.969 [2024-07-14 03:17:51.930607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.969 [2024-07-14 03:17:51.930781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.969 [2024-07-14 03:17:51.930806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.969 qpair failed and we were unable to recover it. 00:29:56.969 [2024-07-14 03:17:51.930970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-07-14 03:17:51.931112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-07-14 03:17:51.931137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 00:29:56.970 [2024-07-14 03:17:51.931290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-07-14 03:17:51.931463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.970 [2024-07-14 03:17:51.931488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.970 qpair failed and we were unable to recover it. 
00:29:56.970 [2024-07-14 03:17:51.931665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.970 [2024-07-14 03:17:51.931822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.970 [2024-07-14 03:17:51.931847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.970 qpair failed and we were unable to recover it.
00:29:56.970 [2024-07-14 03:17:51.932011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.970 [2024-07-14 03:17:51.932183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.970 [2024-07-14 03:17:51.932219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.970 qpair failed and we were unable to recover it.
00:29:56.970 [2024-07-14 03:17:51.932399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.970 03:17:51 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:56.970 [2024-07-14 03:17:51.932578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.970 [2024-07-14 03:17:51.932612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.970 qpair failed and we were unable to recover it.
00:29:56.970 03:17:51 -- common/autotest_common.sh@852 -- # return 0
00:29:56.970 [2024-07-14 03:17:51.932812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.970 03:17:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:29:56.970 [2024-07-14 03:17:51.932984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.970 03:17:51 -- common/autotest_common.sh@718 -- # xtrace_disable
00:29:56.970 [2024-07-14 03:17:51.933016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.970 qpair failed and we were unable to recover it.
00:29:56.970 03:17:51 -- common/autotest_common.sh@10 -- # set +x
00:29:56.970 [2024-07-14 03:17:51.933200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.970 [2024-07-14 03:17:51.933393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.970 [2024-07-14 03:17:51.933426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.970 qpair failed and we were unable to recover it.
00:29:56.970 [2024-07-14 03:17:51.933594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.970 [2024-07-14 03:17:51.933755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.970 [2024-07-14 03:17:51.933781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.970 qpair failed and we were unable to recover it.
00:29:56.972 03:17:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:56.972 03:17:51 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:56.972 03:17:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:56.972 03:17:51 -- common/autotest_common.sh@10 -- # set +x
00:29:56.974 [2024-07-14 03:17:51.982201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.974 Malloc0 00:29:56.974 [2024-07-14 03:17:51.982361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.974 [2024-07-14 03:17:51.982396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.974 qpair failed and we were unable to recover it. 00:29:56.974 [2024-07-14 03:17:51.982566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.974 03:17:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:56.974 [2024-07-14 03:17:51.982753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.974 [2024-07-14 03:17:51.982787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.974 qpair failed and we were unable to recover it. 00:29:56.974 [2024-07-14 03:17:51.982969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.974 03:17:51 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:56.974 03:17:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:56.974 [2024-07-14 03:17:51.983193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.974 [2024-07-14 03:17:51.983227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.974 qpair failed and we were unable to recover it. 
00:29:56.974 03:17:51 -- common/autotest_common.sh@10 -- # set +x 00:29:56.974 [2024-07-14 03:17:51.983413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.974 [2024-07-14 03:17:51.983614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.974 [2024-07-14 03:17:51.983642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.983781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.983932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.983958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.984119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.984306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.984331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.984512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.984681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.984706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 
00:29:56.975 [2024-07-14 03:17:51.984856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.985022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.985047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.985192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.985350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.985375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.985532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.985683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.985708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.985860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.985944] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.975 [2024-07-14 03:17:51.986054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.986078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 
00:29:56.975 [2024-07-14 03:17:51.986293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.986466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.986495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.986647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.986831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.986856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.987016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.987179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.987203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.987403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.987585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.987610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 
00:29:56.975 [2024-07-14 03:17:51.987756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.987897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.987924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.988065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.988249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.988274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.988463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.988665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.988690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.988839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.989018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.989042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 
00:29:56.975 [2024-07-14 03:17:51.989191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.989333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.989358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.989540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.989718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.989744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.989922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.990090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.990120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.990268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.990409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.990435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 
00:29:56.975 [2024-07-14 03:17:51.990585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.990760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.990786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.990962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.991116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.991142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.991311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.991484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.991510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.991661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.991828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.991854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 
00:29:56.975 [2024-07-14 03:17:51.992032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.992191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.992216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.992395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.992539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.992565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.992738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.992929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.992955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.993099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.993264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.993290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 
00:29:56.975 [2024-07-14 03:17:51.993438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.993609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.993635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 [2024-07-14 03:17:51.993830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.994003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.994042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 00:29:56.975 03:17:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:56.975 [2024-07-14 03:17:51.994217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 03:17:51 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:56.975 [2024-07-14 03:17:51.994407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.975 [2024-07-14 03:17:51.994442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.975 qpair failed and we were unable to recover it. 
00:29:56.976 03:17:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:56.976 03:17:51 -- common/autotest_common.sh@10 -- # set +x 00:29:56.976 [2024-07-14 03:17:51.994667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.994840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.994877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 00:29:56.976 [2024-07-14 03:17:51.995026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.995199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.995226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 00:29:56.976 [2024-07-14 03:17:51.995372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.995517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.995544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 00:29:56.976 [2024-07-14 03:17:51.995691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.995832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.995858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 
00:29:56.976 [2024-07-14 03:17:51.996002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.996178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.996204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 00:29:56.976 [2024-07-14 03:17:51.996366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.996531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.996557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 00:29:56.976 [2024-07-14 03:17:51.996730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.996895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.996922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 00:29:56.976 [2024-07-14 03:17:51.997099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.997250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.997277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 
00:29:56.976 [2024-07-14 03:17:51.997459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.997605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.997632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 00:29:56.976 [2024-07-14 03:17:51.997785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.997989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.998017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 00:29:56.976 [2024-07-14 03:17:51.998164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.998314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.998341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 00:29:56.976 [2024-07-14 03:17:51.998518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.998691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.998718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 
00:29:56.976 [2024-07-14 03:17:51.998863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.999026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.999052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 00:29:56.976 [2024-07-14 03:17:51.999227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.999391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.999418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 00:29:56.976 [2024-07-14 03:17:51.999624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.999796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:51.999823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 00:29:56.976 [2024-07-14 03:17:51.999976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:52.000145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:52.000172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 
00:29:56.976 [2024-07-14 03:17:52.000354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:52.000525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:52.000551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 00:29:56.976 [2024-07-14 03:17:52.000730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:52.000893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:52.000920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 00:29:56.976 [2024-07-14 03:17:52.001096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:52.001236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:52.001263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 00:29:56.976 [2024-07-14 03:17:52.001446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:52.001589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.976 [2024-07-14 03:17:52.001615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420 00:29:56.976 qpair failed and we were unable to recover it. 
00:29:56.976 [2024-07-14 03:17:52.001783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.976 [2024-07-14 03:17:52.001947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.001984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 03:17:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:56.977 [2024-07-14 03:17:52.002214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 03:17:52 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:56.977 [2024-07-14 03:17:52.002404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.002441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 03:17:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:56.977 [2024-07-14 03:17:52.002643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 03:17:52 -- common/autotest_common.sh@10 -- # set +x
00:29:56.977 [2024-07-14 03:17:52.002811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.002841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.003027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.003169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.003196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.003358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.003540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.003567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.003716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.003920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.003947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.004094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.004260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.004286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.004457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.004659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.004685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.004871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.005044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.005070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.005267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.005435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.005461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.005610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.005779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.005805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.005978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.006119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.006146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.006319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.006466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.006492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.006665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.006836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.006862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.007023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.007197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.007224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.007378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.007555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.007581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.007725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.007890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.007916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.008075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.008252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.008278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.008475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.008650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.008677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.008851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.009046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.009072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.009245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.009398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.009424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.009574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.009737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.009763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.009915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.010121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.010158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 03:17:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:56.977 [2024-07-14 03:17:52.010379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 03:17:52 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:56.977 03:17:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:56.977 [2024-07-14 03:17:52.010545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.010581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 03:17:52 -- common/autotest_common.sh@10 -- # set +x
00:29:56.977 [2024-07-14 03:17:52.010751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.010967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.010997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.011160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.011336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.011363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.977 qpair failed and we were unable to recover it.
00:29:56.977 [2024-07-14 03:17:52.011500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.011670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.977 [2024-07-14 03:17:52.011701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.978 qpair failed and we were unable to recover it.
00:29:56.978 [2024-07-14 03:17:52.011853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.978 [2024-07-14 03:17:52.012053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.978 [2024-07-14 03:17:52.012080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.978 qpair failed and we were unable to recover it.
00:29:56.978 [2024-07-14 03:17:52.012254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.978 [2024-07-14 03:17:52.012424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.978 [2024-07-14 03:17:52.012451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.978 qpair failed and we were unable to recover it.
00:29:56.978 [2024-07-14 03:17:52.012592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.978 [2024-07-14 03:17:52.012764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.978 [2024-07-14 03:17:52.012791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.978 qpair failed and we were unable to recover it.
00:29:56.978 [2024-07-14 03:17:52.012932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.978 [2024-07-14 03:17:52.013076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.978 [2024-07-14 03:17:52.013103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.978 qpair failed and we were unable to recover it.
00:29:56.978 [2024-07-14 03:17:52.013246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.978 [2024-07-14 03:17:52.013427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.978 [2024-07-14 03:17:52.013453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.978 qpair failed and we were unable to recover it.
00:29:56.978 [2024-07-14 03:17:52.013607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.978 [2024-07-14 03:17:52.013783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.978 [2024-07-14 03:17:52.013810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.978 qpair failed and we were unable to recover it.
00:29:56.978 [2024-07-14 03:17:52.013977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.978 [2024-07-14 03:17:52.014132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:56.978 [2024-07-14 03:17:52.014159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x161a350 with addr=10.0.0.2, port=4420
00:29:56.978 qpair failed and we were unable to recover it.
00:29:56.978 [2024-07-14 03:17:52.014202] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:56.978 [2024-07-14 03:17:52.016742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.978 [2024-07-14 03:17:52.016929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.978 [2024-07-14 03:17:52.016957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.978 [2024-07-14 03:17:52.016974] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.978 [2024-07-14 03:17:52.016989] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.978 [2024-07-14 03:17:52.017025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.978 qpair failed and we were unable to recover it.
00:29:56.978 03:17:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:56.978 03:17:52 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:56.978 03:17:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:56.978 03:17:52 -- common/autotest_common.sh@10 -- # set +x
00:29:56.978 03:17:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:56.978 [2024-07-14 03:17:52.026587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.978 03:17:52 -- host/target_disconnect.sh@58 -- # wait 2131857
00:29:56.978 [2024-07-14 03:17:52.026753] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.978 [2024-07-14 03:17:52.026782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.978 [2024-07-14 03:17:52.026798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.978 [2024-07-14 03:17:52.026828] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.978 [2024-07-14 03:17:52.026859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.978 qpair failed and we were unable to recover it.
00:29:56.978 [2024-07-14 03:17:52.036583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.978 [2024-07-14 03:17:52.036736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.978 [2024-07-14 03:17:52.036764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.978 [2024-07-14 03:17:52.036780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.978 [2024-07-14 03:17:52.036793] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.978 [2024-07-14 03:17:52.036822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.978 qpair failed and we were unable to recover it.
00:29:56.978 [2024-07-14 03:17:52.046597] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.978 [2024-07-14 03:17:52.046754] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.978 [2024-07-14 03:17:52.046781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.978 [2024-07-14 03:17:52.046796] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.978 [2024-07-14 03:17:52.046809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.978 [2024-07-14 03:17:52.046838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.978 qpair failed and we were unable to recover it.
00:29:56.978 [2024-07-14 03:17:52.056577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.978 [2024-07-14 03:17:52.056733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.978 [2024-07-14 03:17:52.056759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.978 [2024-07-14 03:17:52.056775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.978 [2024-07-14 03:17:52.056789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.978 [2024-07-14 03:17:52.056819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.978 qpair failed and we were unable to recover it.
00:29:56.978 [2024-07-14 03:17:52.066606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.978 [2024-07-14 03:17:52.066757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.978 [2024-07-14 03:17:52.066784] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.978 [2024-07-14 03:17:52.066799] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.978 [2024-07-14 03:17:52.066813] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.978 [2024-07-14 03:17:52.066842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.978 qpair failed and we were unable to recover it.
00:29:56.978 [2024-07-14 03:17:52.076636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.978 [2024-07-14 03:17:52.076787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.978 [2024-07-14 03:17:52.076817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.978 [2024-07-14 03:17:52.076834] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.978 [2024-07-14 03:17:52.076848] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.978 [2024-07-14 03:17:52.076889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.978 qpair failed and we were unable to recover it.
00:29:56.978 [2024-07-14 03:17:52.086630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.978 [2024-07-14 03:17:52.086789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.978 [2024-07-14 03:17:52.086816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.978 [2024-07-14 03:17:52.086832] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.978 [2024-07-14 03:17:52.086846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.978 [2024-07-14 03:17:52.086882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.978 qpair failed and we were unable to recover it.
00:29:56.978 [2024-07-14 03:17:52.096697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.978 [2024-07-14 03:17:52.096884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.978 [2024-07-14 03:17:52.096912] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.978 [2024-07-14 03:17:52.096940] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.978 [2024-07-14 03:17:52.096953] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.978 [2024-07-14 03:17:52.096982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.978 qpair failed and we were unable to recover it.
00:29:56.978 [2024-07-14 03:17:52.106721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.978 [2024-07-14 03:17:52.106876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.978 [2024-07-14 03:17:52.106914] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.978 [2024-07-14 03:17:52.106934] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.978 [2024-07-14 03:17:52.106949] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.978 [2024-07-14 03:17:52.106979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.978 qpair failed and we were unable to recover it.
00:29:56.978 [2024-07-14 03:17:52.116751] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.979 [2024-07-14 03:17:52.116913] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.979 [2024-07-14 03:17:52.116940] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.979 [2024-07-14 03:17:52.116955] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.979 [2024-07-14 03:17:52.116969] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.979 [2024-07-14 03:17:52.116998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.979 qpair failed and we were unable to recover it.
00:29:56.979 [2024-07-14 03:17:52.126758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.979 [2024-07-14 03:17:52.126920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.979 [2024-07-14 03:17:52.126945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.979 [2024-07-14 03:17:52.126960] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.979 [2024-07-14 03:17:52.126973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.979 [2024-07-14 03:17:52.127003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.979 qpair failed and we were unable to recover it.
00:29:56.979 [2024-07-14 03:17:52.136880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.979 [2024-07-14 03:17:52.137047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.979 [2024-07-14 03:17:52.137073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.979 [2024-07-14 03:17:52.137088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.979 [2024-07-14 03:17:52.137101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.979 [2024-07-14 03:17:52.137131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.979 qpair failed and we were unable to recover it.
00:29:56.979 [2024-07-14 03:17:52.146852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.979 [2024-07-14 03:17:52.147026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.979 [2024-07-14 03:17:52.147052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.979 [2024-07-14 03:17:52.147067] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.979 [2024-07-14 03:17:52.147081] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.979 [2024-07-14 03:17:52.147110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.979 qpair failed and we were unable to recover it.
00:29:56.979 [2024-07-14 03:17:52.156902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.979 [2024-07-14 03:17:52.157067] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.979 [2024-07-14 03:17:52.157093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.979 [2024-07-14 03:17:52.157110] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.979 [2024-07-14 03:17:52.157123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.979 [2024-07-14 03:17:52.157152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.979 qpair failed and we were unable to recover it.
00:29:56.979 [2024-07-14 03:17:52.166911] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.979 [2024-07-14 03:17:52.167073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.979 [2024-07-14 03:17:52.167099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.979 [2024-07-14 03:17:52.167113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.979 [2024-07-14 03:17:52.167127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.979 [2024-07-14 03:17:52.167157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.979 qpair failed and we were unable to recover it.
00:29:56.979 [2024-07-14 03:17:52.176889] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.979 [2024-07-14 03:17:52.177046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.979 [2024-07-14 03:17:52.177073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.979 [2024-07-14 03:17:52.177088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.979 [2024-07-14 03:17:52.177101] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.979 [2024-07-14 03:17:52.177131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.979 qpair failed and we were unable to recover it.
00:29:56.979 [2024-07-14 03:17:52.186950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.979 [2024-07-14 03:17:52.187109] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.979 [2024-07-14 03:17:52.187135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.979 [2024-07-14 03:17:52.187150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.979 [2024-07-14 03:17:52.187163] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.979 [2024-07-14 03:17:52.187193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.979 qpair failed and we were unable to recover it.
00:29:56.979 [2024-07-14 03:17:52.197051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.979 [2024-07-14 03:17:52.197218] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.979 [2024-07-14 03:17:52.197244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.979 [2024-07-14 03:17:52.197265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.979 [2024-07-14 03:17:52.197279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.979 [2024-07-14 03:17:52.197323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.979 qpair failed and we were unable to recover it.
00:29:56.979 [2024-07-14 03:17:52.206995] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:56.979 [2024-07-14 03:17:52.207151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:56.979 [2024-07-14 03:17:52.207177] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:56.979 [2024-07-14 03:17:52.207192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:56.979 [2024-07-14 03:17:52.207206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:56.979 [2024-07-14 03:17:52.207235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:56.979 qpair failed and we were unable to recover it.
00:29:57.239 [2024-07-14 03:17:52.217047] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.239 [2024-07-14 03:17:52.217202] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.239 [2024-07-14 03:17:52.217232] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.239 [2024-07-14 03:17:52.217248] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.239 [2024-07-14 03:17:52.217263] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.239 [2024-07-14 03:17:52.217293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.239 qpair failed and we were unable to recover it.
00:29:57.239 [2024-07-14 03:17:52.227063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.239 [2024-07-14 03:17:52.227222] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.239 [2024-07-14 03:17:52.227249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.239 [2024-07-14 03:17:52.227264] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.239 [2024-07-14 03:17:52.227277] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.239 [2024-07-14 03:17:52.227321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.239 qpair failed and we were unable to recover it. 
00:29:57.239 [2024-07-14 03:17:52.237109] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.239 [2024-07-14 03:17:52.237266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.239 [2024-07-14 03:17:52.237294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.239 [2024-07-14 03:17:52.237310] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.239 [2024-07-14 03:17:52.237325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.239 [2024-07-14 03:17:52.237371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.239 qpair failed and we were unable to recover it. 
00:29:57.239 [2024-07-14 03:17:52.247098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.239 [2024-07-14 03:17:52.247259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.239 [2024-07-14 03:17:52.247285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.239 [2024-07-14 03:17:52.247300] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.239 [2024-07-14 03:17:52.247315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.239 [2024-07-14 03:17:52.247344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.239 qpair failed and we were unable to recover it. 
00:29:57.239 [2024-07-14 03:17:52.257123] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.239 [2024-07-14 03:17:52.257267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.239 [2024-07-14 03:17:52.257293] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.239 [2024-07-14 03:17:52.257313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.239 [2024-07-14 03:17:52.257326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.239 [2024-07-14 03:17:52.257355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.239 qpair failed and we were unable to recover it. 
00:29:57.239 [2024-07-14 03:17:52.267164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.239 [2024-07-14 03:17:52.267335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.239 [2024-07-14 03:17:52.267362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.239 [2024-07-14 03:17:52.267380] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.239 [2024-07-14 03:17:52.267395] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.239 [2024-07-14 03:17:52.267424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.239 qpair failed and we were unable to recover it. 
00:29:57.239 [2024-07-14 03:17:52.277177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.239 [2024-07-14 03:17:52.277332] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.239 [2024-07-14 03:17:52.277358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.239 [2024-07-14 03:17:52.277374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.239 [2024-07-14 03:17:52.277388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.239 [2024-07-14 03:17:52.277417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.239 qpair failed and we were unable to recover it. 
00:29:57.239 [2024-07-14 03:17:52.287224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.239 [2024-07-14 03:17:52.287385] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.239 [2024-07-14 03:17:52.287411] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.239 [2024-07-14 03:17:52.287433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.239 [2024-07-14 03:17:52.287448] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.239 [2024-07-14 03:17:52.287477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.239 qpair failed and we were unable to recover it. 
00:29:57.239 [2024-07-14 03:17:52.297236] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.239 [2024-07-14 03:17:52.297409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.239 [2024-07-14 03:17:52.297435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.239 [2024-07-14 03:17:52.297450] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.239 [2024-07-14 03:17:52.297464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.239 [2024-07-14 03:17:52.297493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.239 qpair failed and we were unable to recover it. 
00:29:57.239 [2024-07-14 03:17:52.307243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.239 [2024-07-14 03:17:52.307395] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.239 [2024-07-14 03:17:52.307421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.239 [2024-07-14 03:17:52.307436] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.239 [2024-07-14 03:17:52.307450] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.239 [2024-07-14 03:17:52.307478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.239 qpair failed and we were unable to recover it. 
00:29:57.239 [2024-07-14 03:17:52.317303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.239 [2024-07-14 03:17:52.317464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.239 [2024-07-14 03:17:52.317490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.239 [2024-07-14 03:17:52.317505] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.239 [2024-07-14 03:17:52.317519] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.239 [2024-07-14 03:17:52.317548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.239 qpair failed and we were unable to recover it. 
00:29:57.239 [2024-07-14 03:17:52.327318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.239 [2024-07-14 03:17:52.327480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.239 [2024-07-14 03:17:52.327505] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.239 [2024-07-14 03:17:52.327520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.239 [2024-07-14 03:17:52.327534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.239 [2024-07-14 03:17:52.327563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.239 qpair failed and we were unable to recover it. 
00:29:57.239 [2024-07-14 03:17:52.337373] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.240 [2024-07-14 03:17:52.337526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.240 [2024-07-14 03:17:52.337552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.240 [2024-07-14 03:17:52.337568] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.240 [2024-07-14 03:17:52.337581] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.240 [2024-07-14 03:17:52.337610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.240 qpair failed and we were unable to recover it. 
00:29:57.240 [2024-07-14 03:17:52.347380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.240 [2024-07-14 03:17:52.347545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.240 [2024-07-14 03:17:52.347571] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.240 [2024-07-14 03:17:52.347585] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.240 [2024-07-14 03:17:52.347599] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.240 [2024-07-14 03:17:52.347628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.240 qpair failed and we were unable to recover it. 
00:29:57.240 [2024-07-14 03:17:52.357528] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.240 [2024-07-14 03:17:52.357696] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.240 [2024-07-14 03:17:52.357723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.240 [2024-07-14 03:17:52.357737] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.240 [2024-07-14 03:17:52.357752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.240 [2024-07-14 03:17:52.357780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.240 qpair failed and we were unable to recover it. 
00:29:57.240 [2024-07-14 03:17:52.367410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.240 [2024-07-14 03:17:52.367572] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.240 [2024-07-14 03:17:52.367597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.240 [2024-07-14 03:17:52.367612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.240 [2024-07-14 03:17:52.367626] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.240 [2024-07-14 03:17:52.367655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.240 qpair failed and we were unable to recover it. 
00:29:57.240 [2024-07-14 03:17:52.377441] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.240 [2024-07-14 03:17:52.377598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.240 [2024-07-14 03:17:52.377624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.240 [2024-07-14 03:17:52.377644] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.240 [2024-07-14 03:17:52.377659] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.240 [2024-07-14 03:17:52.377688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.240 qpair failed and we were unable to recover it. 
00:29:57.240 [2024-07-14 03:17:52.387468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.240 [2024-07-14 03:17:52.387630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.240 [2024-07-14 03:17:52.387656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.240 [2024-07-14 03:17:52.387670] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.240 [2024-07-14 03:17:52.387684] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.240 [2024-07-14 03:17:52.387712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.240 qpair failed and we were unable to recover it. 
00:29:57.240 [2024-07-14 03:17:52.397548] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.240 [2024-07-14 03:17:52.397721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.240 [2024-07-14 03:17:52.397748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.240 [2024-07-14 03:17:52.397768] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.240 [2024-07-14 03:17:52.397783] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.240 [2024-07-14 03:17:52.397828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.240 qpair failed and we were unable to recover it. 
00:29:57.240 [2024-07-14 03:17:52.407540] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.240 [2024-07-14 03:17:52.407698] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.240 [2024-07-14 03:17:52.407724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.240 [2024-07-14 03:17:52.407738] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.240 [2024-07-14 03:17:52.407752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.240 [2024-07-14 03:17:52.407781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.240 qpair failed and we were unable to recover it. 
00:29:57.240 [2024-07-14 03:17:52.417557] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.240 [2024-07-14 03:17:52.417718] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.240 [2024-07-14 03:17:52.417744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.240 [2024-07-14 03:17:52.417759] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.240 [2024-07-14 03:17:52.417787] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.240 [2024-07-14 03:17:52.417816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.240 qpair failed and we were unable to recover it. 
00:29:57.240 [2024-07-14 03:17:52.427620] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.240 [2024-07-14 03:17:52.427792] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.240 [2024-07-14 03:17:52.427817] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.240 [2024-07-14 03:17:52.427833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.240 [2024-07-14 03:17:52.427847] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.240 [2024-07-14 03:17:52.427885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.240 qpair failed and we were unable to recover it. 
00:29:57.240 [2024-07-14 03:17:52.437673] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.240 [2024-07-14 03:17:52.437858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.240 [2024-07-14 03:17:52.437905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.240 [2024-07-14 03:17:52.437920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.240 [2024-07-14 03:17:52.437934] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.240 [2024-07-14 03:17:52.437964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.240 qpair failed and we were unable to recover it. 
00:29:57.240 [2024-07-14 03:17:52.447660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.240 [2024-07-14 03:17:52.447825] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.240 [2024-07-14 03:17:52.447861] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.240 [2024-07-14 03:17:52.447883] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.240 [2024-07-14 03:17:52.447896] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.240 [2024-07-14 03:17:52.447926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.240 qpair failed and we were unable to recover it. 
00:29:57.240 [2024-07-14 03:17:52.457677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.240 [2024-07-14 03:17:52.457837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.240 [2024-07-14 03:17:52.457863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.240 [2024-07-14 03:17:52.457889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.240 [2024-07-14 03:17:52.457903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.240 [2024-07-14 03:17:52.457933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.240 qpair failed and we were unable to recover it. 
00:29:57.240 [2024-07-14 03:17:52.467690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.240 [2024-07-14 03:17:52.467849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.240 [2024-07-14 03:17:52.467886] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.240 [2024-07-14 03:17:52.467903] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.240 [2024-07-14 03:17:52.467918] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.240 [2024-07-14 03:17:52.467947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.240 qpair failed and we were unable to recover it. 
00:29:57.240 [2024-07-14 03:17:52.477729] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.240 [2024-07-14 03:17:52.477904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.240 [2024-07-14 03:17:52.477930] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.241 [2024-07-14 03:17:52.477945] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.241 [2024-07-14 03:17:52.477959] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.241 [2024-07-14 03:17:52.477988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.241 qpair failed and we were unable to recover it. 
00:29:57.241 [2024-07-14 03:17:52.487787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.241 [2024-07-14 03:17:52.487960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.241 [2024-07-14 03:17:52.487987] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.241 [2024-07-14 03:17:52.488003] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.241 [2024-07-14 03:17:52.488017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.241 [2024-07-14 03:17:52.488047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.241 qpair failed and we were unable to recover it. 
00:29:57.500 [2024-07-14 03:17:52.497783] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.500 [2024-07-14 03:17:52.497946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.500 [2024-07-14 03:17:52.497975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.500 [2024-07-14 03:17:52.497990] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.500 [2024-07-14 03:17:52.498004] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.500 [2024-07-14 03:17:52.498034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.500 qpair failed and we were unable to recover it. 
00:29:57.500 [2024-07-14 03:17:52.507818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.500 [2024-07-14 03:17:52.507987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.500 [2024-07-14 03:17:52.508013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.500 [2024-07-14 03:17:52.508028] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.500 [2024-07-14 03:17:52.508042] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.500 [2024-07-14 03:17:52.508072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.500 qpair failed and we were unable to recover it.
00:29:57.500 [2024-07-14 03:17:52.517886] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.500 [2024-07-14 03:17:52.518039] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.500 [2024-07-14 03:17:52.518065] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.500 [2024-07-14 03:17:52.518080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.500 [2024-07-14 03:17:52.518094] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.500 [2024-07-14 03:17:52.518123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.500 qpair failed and we were unable to recover it.
00:29:57.500 [2024-07-14 03:17:52.527897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.500 [2024-07-14 03:17:52.528064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.500 [2024-07-14 03:17:52.528089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.500 [2024-07-14 03:17:52.528103] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.500 [2024-07-14 03:17:52.528117] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.500 [2024-07-14 03:17:52.528146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.500 qpair failed and we were unable to recover it.
00:29:57.500 [2024-07-14 03:17:52.537905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.500 [2024-07-14 03:17:52.538061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.500 [2024-07-14 03:17:52.538087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.500 [2024-07-14 03:17:52.538102] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.500 [2024-07-14 03:17:52.538116] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.500 [2024-07-14 03:17:52.538145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.500 qpair failed and we were unable to recover it.
00:29:57.500 [2024-07-14 03:17:52.548022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.500 [2024-07-14 03:17:52.548196] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.500 [2024-07-14 03:17:52.548222] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.501 [2024-07-14 03:17:52.548237] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.501 [2024-07-14 03:17:52.548251] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.501 [2024-07-14 03:17:52.548280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.501 qpair failed and we were unable to recover it.
00:29:57.501 [2024-07-14 03:17:52.557960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.501 [2024-07-14 03:17:52.558119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.501 [2024-07-14 03:17:52.558150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.501 [2024-07-14 03:17:52.558170] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.501 [2024-07-14 03:17:52.558183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.501 [2024-07-14 03:17:52.558212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.501 qpair failed and we were unable to recover it.
00:29:57.501 [2024-07-14 03:17:52.568034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.501 [2024-07-14 03:17:52.568226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.501 [2024-07-14 03:17:52.568252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.501 [2024-07-14 03:17:52.568283] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.501 [2024-07-14 03:17:52.568296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.501 [2024-07-14 03:17:52.568325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.501 qpair failed and we were unable to recover it.
00:29:57.501 [2024-07-14 03:17:52.578083] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.501 [2024-07-14 03:17:52.578288] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.501 [2024-07-14 03:17:52.578316] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.501 [2024-07-14 03:17:52.578335] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.501 [2024-07-14 03:17:52.578350] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.501 [2024-07-14 03:17:52.578394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.501 qpair failed and we were unable to recover it.
00:29:57.501 [2024-07-14 03:17:52.588128] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.501 [2024-07-14 03:17:52.588333] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.501 [2024-07-14 03:17:52.588359] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.501 [2024-07-14 03:17:52.588374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.501 [2024-07-14 03:17:52.588388] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.501 [2024-07-14 03:17:52.588417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.501 qpair failed and we were unable to recover it.
00:29:57.501 [2024-07-14 03:17:52.598090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.501 [2024-07-14 03:17:52.598249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.501 [2024-07-14 03:17:52.598275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.501 [2024-07-14 03:17:52.598290] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.501 [2024-07-14 03:17:52.598304] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.501 [2024-07-14 03:17:52.598339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.501 qpair failed and we were unable to recover it.
00:29:57.501 [2024-07-14 03:17:52.608135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.501 [2024-07-14 03:17:52.608296] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.501 [2024-07-14 03:17:52.608321] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.501 [2024-07-14 03:17:52.608336] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.501 [2024-07-14 03:17:52.608350] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.501 [2024-07-14 03:17:52.608394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.501 qpair failed and we were unable to recover it.
00:29:57.501 [2024-07-14 03:17:52.618188] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.501 [2024-07-14 03:17:52.618379] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.501 [2024-07-14 03:17:52.618404] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.501 [2024-07-14 03:17:52.618419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.501 [2024-07-14 03:17:52.618434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.501 [2024-07-14 03:17:52.618462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.501 qpair failed and we were unable to recover it.
00:29:57.501 [2024-07-14 03:17:52.628173] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.501 [2024-07-14 03:17:52.628328] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.501 [2024-07-14 03:17:52.628354] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.501 [2024-07-14 03:17:52.628368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.501 [2024-07-14 03:17:52.628383] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.501 [2024-07-14 03:17:52.628411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.501 qpair failed and we were unable to recover it.
00:29:57.501 [2024-07-14 03:17:52.638296] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.501 [2024-07-14 03:17:52.638454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.501 [2024-07-14 03:17:52.638480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.501 [2024-07-14 03:17:52.638494] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.501 [2024-07-14 03:17:52.638509] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.501 [2024-07-14 03:17:52.638537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.501 qpair failed and we were unable to recover it.
00:29:57.501 [2024-07-14 03:17:52.648296] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.501 [2024-07-14 03:17:52.648459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.501 [2024-07-14 03:17:52.648492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.501 [2024-07-14 03:17:52.648509] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.501 [2024-07-14 03:17:52.648523] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.501 [2024-07-14 03:17:52.648569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.501 qpair failed and we were unable to recover it.
00:29:57.501 [2024-07-14 03:17:52.658286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.501 [2024-07-14 03:17:52.658451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.501 [2024-07-14 03:17:52.658477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.501 [2024-07-14 03:17:52.658492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.501 [2024-07-14 03:17:52.658507] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.501 [2024-07-14 03:17:52.658536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.501 qpair failed and we were unable to recover it.
00:29:57.501 [2024-07-14 03:17:52.668301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.501 [2024-07-14 03:17:52.668459] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.501 [2024-07-14 03:17:52.668486] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.501 [2024-07-14 03:17:52.668501] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.501 [2024-07-14 03:17:52.668515] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.501 [2024-07-14 03:17:52.668544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.501 qpair failed and we were unable to recover it.
00:29:57.501 [2024-07-14 03:17:52.678320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.501 [2024-07-14 03:17:52.678474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.501 [2024-07-14 03:17:52.678501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.501 [2024-07-14 03:17:52.678516] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.501 [2024-07-14 03:17:52.678530] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.501 [2024-07-14 03:17:52.678559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.501 qpair failed and we were unable to recover it.
00:29:57.501 [2024-07-14 03:17:52.688396] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.501 [2024-07-14 03:17:52.688604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.501 [2024-07-14 03:17:52.688629] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.501 [2024-07-14 03:17:52.688644] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.501 [2024-07-14 03:17:52.688658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.501 [2024-07-14 03:17:52.688692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.502 qpair failed and we were unable to recover it.
00:29:57.502 [2024-07-14 03:17:52.698417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.502 [2024-07-14 03:17:52.698574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.502 [2024-07-14 03:17:52.698600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.502 [2024-07-14 03:17:52.698615] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.502 [2024-07-14 03:17:52.698629] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.502 [2024-07-14 03:17:52.698658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.502 qpair failed and we were unable to recover it.
00:29:57.502 [2024-07-14 03:17:52.708452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.502 [2024-07-14 03:17:52.708618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.502 [2024-07-14 03:17:52.708643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.502 [2024-07-14 03:17:52.708658] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.502 [2024-07-14 03:17:52.708672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.502 [2024-07-14 03:17:52.708715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.502 qpair failed and we were unable to recover it.
00:29:57.502 [2024-07-14 03:17:52.718438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.502 [2024-07-14 03:17:52.718595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.502 [2024-07-14 03:17:52.718621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.502 [2024-07-14 03:17:52.718636] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.502 [2024-07-14 03:17:52.718651] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.502 [2024-07-14 03:17:52.718679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.502 qpair failed and we were unable to recover it.
00:29:57.502 [2024-07-14 03:17:52.728500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.502 [2024-07-14 03:17:52.728716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.502 [2024-07-14 03:17:52.728742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.502 [2024-07-14 03:17:52.728762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.502 [2024-07-14 03:17:52.728777] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.502 [2024-07-14 03:17:52.728821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.502 qpair failed and we were unable to recover it.
00:29:57.502 [2024-07-14 03:17:52.738513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.502 [2024-07-14 03:17:52.738669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.502 [2024-07-14 03:17:52.738700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.502 [2024-07-14 03:17:52.738716] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.502 [2024-07-14 03:17:52.738730] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.502 [2024-07-14 03:17:52.738759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.502 qpair failed and we were unable to recover it.
00:29:57.502 [2024-07-14 03:17:52.748543] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.502 [2024-07-14 03:17:52.748700] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.502 [2024-07-14 03:17:52.748728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.502 [2024-07-14 03:17:52.748744] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.502 [2024-07-14 03:17:52.748758] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.502 [2024-07-14 03:17:52.748788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.502 qpair failed and we were unable to recover it.
00:29:57.761 [2024-07-14 03:17:52.758559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.761 [2024-07-14 03:17:52.758722] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.761 [2024-07-14 03:17:52.758750] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.761 [2024-07-14 03:17:52.758766] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.761 [2024-07-14 03:17:52.758781] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.761 [2024-07-14 03:17:52.758811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.761 qpair failed and we were unable to recover it.
00:29:57.761 [2024-07-14 03:17:52.768615] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.761 [2024-07-14 03:17:52.768805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.761 [2024-07-14 03:17:52.768832] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.761 [2024-07-14 03:17:52.768852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.761 [2024-07-14 03:17:52.768874] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.761 [2024-07-14 03:17:52.768909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.761 qpair failed and we were unable to recover it.
00:29:57.761 [2024-07-14 03:17:52.778707] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.761 [2024-07-14 03:17:52.778871] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.761 [2024-07-14 03:17:52.778899] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.761 [2024-07-14 03:17:52.778914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.761 [2024-07-14 03:17:52.778928] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.761 [2024-07-14 03:17:52.778963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.761 qpair failed and we were unable to recover it.
00:29:57.761 [2024-07-14 03:17:52.788704] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.761 [2024-07-14 03:17:52.788862] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.761 [2024-07-14 03:17:52.788894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.761 [2024-07-14 03:17:52.788909] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.761 [2024-07-14 03:17:52.788923] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.761 [2024-07-14 03:17:52.788953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.761 qpair failed and we were unable to recover it.
00:29:57.761 [2024-07-14 03:17:52.798672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.761 [2024-07-14 03:17:52.798827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.761 [2024-07-14 03:17:52.798853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.761 [2024-07-14 03:17:52.798874] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.761 [2024-07-14 03:17:52.798890] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.761 [2024-07-14 03:17:52.798920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.761 qpair failed and we were unable to recover it.
00:29:57.761 [2024-07-14 03:17:52.808741] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.762 [2024-07-14 03:17:52.808940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.762 [2024-07-14 03:17:52.808966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.762 [2024-07-14 03:17:52.808981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.762 [2024-07-14 03:17:52.808995] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.762 [2024-07-14 03:17:52.809024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.762 qpair failed and we were unable to recover it.
00:29:57.762 [2024-07-14 03:17:52.818718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.762 [2024-07-14 03:17:52.818889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.762 [2024-07-14 03:17:52.818915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.762 [2024-07-14 03:17:52.818930] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.762 [2024-07-14 03:17:52.818944] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.762 [2024-07-14 03:17:52.818973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.762 qpair failed and we were unable to recover it.
00:29:57.762 [2024-07-14 03:17:52.828788] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.762 [2024-07-14 03:17:52.828970] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.762 [2024-07-14 03:17:52.829001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.762 [2024-07-14 03:17:52.829017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.762 [2024-07-14 03:17:52.829032] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.762 [2024-07-14 03:17:52.829061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.762 qpair failed and we were unable to recover it.
00:29:57.762 [2024-07-14 03:17:52.838793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.762 [2024-07-14 03:17:52.838964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.762 [2024-07-14 03:17:52.838990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.762 [2024-07-14 03:17:52.839005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.762 [2024-07-14 03:17:52.839019] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.762 [2024-07-14 03:17:52.839048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.762 qpair failed and we were unable to recover it.
00:29:57.762 [2024-07-14 03:17:52.848843] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.762 [2024-07-14 03:17:52.849047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.762 [2024-07-14 03:17:52.849073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.762 [2024-07-14 03:17:52.849088] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.762 [2024-07-14 03:17:52.849102] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.762 [2024-07-14 03:17:52.849131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.762 qpair failed and we were unable to recover it.
00:29:57.762 [2024-07-14 03:17:52.858955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:57.762 [2024-07-14 03:17:52.859135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:57.762 [2024-07-14 03:17:52.859162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:57.762 [2024-07-14 03:17:52.859192] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:57.762 [2024-07-14 03:17:52.859205] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:57.762 [2024-07-14 03:17:52.859255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:57.762 qpair failed and we were unable to recover it.
00:29:57.762 [2024-07-14 03:17:52.868903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.762 [2024-07-14 03:17:52.869055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.762 [2024-07-14 03:17:52.869081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.762 [2024-07-14 03:17:52.869096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.762 [2024-07-14 03:17:52.869110] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.762 [2024-07-14 03:17:52.869153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.762 qpair failed and we were unable to recover it. 
00:29:57.762 [2024-07-14 03:17:52.878946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.762 [2024-07-14 03:17:52.879109] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.762 [2024-07-14 03:17:52.879135] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.762 [2024-07-14 03:17:52.879150] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.762 [2024-07-14 03:17:52.879164] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.762 [2024-07-14 03:17:52.879193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.762 qpair failed and we were unable to recover it. 
00:29:57.762 [2024-07-14 03:17:52.888998] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.762 [2024-07-14 03:17:52.889161] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.762 [2024-07-14 03:17:52.889187] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.762 [2024-07-14 03:17:52.889202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.762 [2024-07-14 03:17:52.889215] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.762 [2024-07-14 03:17:52.889243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.762 qpair failed and we were unable to recover it. 
00:29:57.762 [2024-07-14 03:17:52.898992] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.762 [2024-07-14 03:17:52.899143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.762 [2024-07-14 03:17:52.899168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.762 [2024-07-14 03:17:52.899183] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.762 [2024-07-14 03:17:52.899198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.762 [2024-07-14 03:17:52.899226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.762 qpair failed and we were unable to recover it. 
00:29:57.762 [2024-07-14 03:17:52.909013] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.762 [2024-07-14 03:17:52.909170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.762 [2024-07-14 03:17:52.909197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.762 [2024-07-14 03:17:52.909212] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.762 [2024-07-14 03:17:52.909226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.762 [2024-07-14 03:17:52.909255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.762 qpair failed and we were unable to recover it. 
00:29:57.762 [2024-07-14 03:17:52.919065] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.762 [2024-07-14 03:17:52.919219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.762 [2024-07-14 03:17:52.919250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.762 [2024-07-14 03:17:52.919265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.762 [2024-07-14 03:17:52.919279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.762 [2024-07-14 03:17:52.919324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.762 qpair failed and we were unable to recover it. 
00:29:57.762 [2024-07-14 03:17:52.929119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.762 [2024-07-14 03:17:52.929324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.762 [2024-07-14 03:17:52.929350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.762 [2024-07-14 03:17:52.929365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.762 [2024-07-14 03:17:52.929380] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.762 [2024-07-14 03:17:52.929413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.762 qpair failed and we were unable to recover it. 
00:29:57.762 [2024-07-14 03:17:52.939101] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.762 [2024-07-14 03:17:52.939249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.762 [2024-07-14 03:17:52.939275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.762 [2024-07-14 03:17:52.939290] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.762 [2024-07-14 03:17:52.939303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.762 [2024-07-14 03:17:52.939331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.762 qpair failed and we were unable to recover it. 
00:29:57.762 [2024-07-14 03:17:52.949155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.762 [2024-07-14 03:17:52.949356] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.762 [2024-07-14 03:17:52.949384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.763 [2024-07-14 03:17:52.949403] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.763 [2024-07-14 03:17:52.949417] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.763 [2024-07-14 03:17:52.949447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.763 qpair failed and we were unable to recover it. 
00:29:57.763 [2024-07-14 03:17:52.959261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.763 [2024-07-14 03:17:52.959424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.763 [2024-07-14 03:17:52.959451] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.763 [2024-07-14 03:17:52.959467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.763 [2024-07-14 03:17:52.959486] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.763 [2024-07-14 03:17:52.959517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.763 qpair failed and we were unable to recover it. 
00:29:57.763 [2024-07-14 03:17:52.969185] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.763 [2024-07-14 03:17:52.969350] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.763 [2024-07-14 03:17:52.969377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.763 [2024-07-14 03:17:52.969392] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.763 [2024-07-14 03:17:52.969406] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.763 [2024-07-14 03:17:52.969435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.763 qpair failed and we were unable to recover it. 
00:29:57.763 [2024-07-14 03:17:52.979217] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.763 [2024-07-14 03:17:52.979372] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.763 [2024-07-14 03:17:52.979399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.763 [2024-07-14 03:17:52.979414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.763 [2024-07-14 03:17:52.979428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.763 [2024-07-14 03:17:52.979457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.763 qpair failed and we were unable to recover it. 
00:29:57.763 [2024-07-14 03:17:52.989226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.763 [2024-07-14 03:17:52.989374] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.763 [2024-07-14 03:17:52.989399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.763 [2024-07-14 03:17:52.989414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.763 [2024-07-14 03:17:52.989427] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.763 [2024-07-14 03:17:52.989455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.763 qpair failed and we were unable to recover it. 
00:29:57.763 [2024-07-14 03:17:52.999253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.763 [2024-07-14 03:17:52.999400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.763 [2024-07-14 03:17:52.999427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.763 [2024-07-14 03:17:52.999442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.763 [2024-07-14 03:17:52.999456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.763 [2024-07-14 03:17:52.999484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.763 qpair failed and we were unable to recover it. 
00:29:57.763 [2024-07-14 03:17:53.009291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.763 [2024-07-14 03:17:53.009456] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.763 [2024-07-14 03:17:53.009489] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.763 [2024-07-14 03:17:53.009518] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.763 [2024-07-14 03:17:53.009545] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:57.763 [2024-07-14 03:17:53.009585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.763 qpair failed and we were unable to recover it. 
00:29:58.022 [2024-07-14 03:17:53.019316] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.022 [2024-07-14 03:17:53.019479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.022 [2024-07-14 03:17:53.019519] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.022 [2024-07-14 03:17:53.019539] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.022 [2024-07-14 03:17:53.019553] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.022 [2024-07-14 03:17:53.019584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.022 qpair failed and we were unable to recover it. 
00:29:58.022 [2024-07-14 03:17:53.029341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.022 [2024-07-14 03:17:53.029494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.022 [2024-07-14 03:17:53.029522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.022 [2024-07-14 03:17:53.029538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.022 [2024-07-14 03:17:53.029552] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.022 [2024-07-14 03:17:53.029581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.022 qpair failed and we were unable to recover it. 
00:29:58.022 [2024-07-14 03:17:53.039392] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.022 [2024-07-14 03:17:53.039546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.022 [2024-07-14 03:17:53.039572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.022 [2024-07-14 03:17:53.039588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.022 [2024-07-14 03:17:53.039602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.022 [2024-07-14 03:17:53.039631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.022 qpair failed and we were unable to recover it. 
00:29:58.022 [2024-07-14 03:17:53.049444] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.022 [2024-07-14 03:17:53.049600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.022 [2024-07-14 03:17:53.049627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.022 [2024-07-14 03:17:53.049643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.022 [2024-07-14 03:17:53.049662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.022 [2024-07-14 03:17:53.049692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.022 qpair failed and we were unable to recover it. 
00:29:58.022 [2024-07-14 03:17:53.059465] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.022 [2024-07-14 03:17:53.059620] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.022 [2024-07-14 03:17:53.059647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.022 [2024-07-14 03:17:53.059663] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.023 [2024-07-14 03:17:53.059677] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.023 [2024-07-14 03:17:53.059708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.023 qpair failed and we were unable to recover it. 
00:29:58.023 [2024-07-14 03:17:53.069571] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.023 [2024-07-14 03:17:53.069734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.023 [2024-07-14 03:17:53.069761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.023 [2024-07-14 03:17:53.069776] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.023 [2024-07-14 03:17:53.069790] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.023 [2024-07-14 03:17:53.069833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.023 qpair failed and we were unable to recover it. 
00:29:58.023 [2024-07-14 03:17:53.079502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.023 [2024-07-14 03:17:53.079656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.023 [2024-07-14 03:17:53.079683] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.023 [2024-07-14 03:17:53.079699] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.023 [2024-07-14 03:17:53.079713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.023 [2024-07-14 03:17:53.079742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.023 qpair failed and we were unable to recover it. 
00:29:58.023 [2024-07-14 03:17:53.089546] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.023 [2024-07-14 03:17:53.089702] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.023 [2024-07-14 03:17:53.089729] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.023 [2024-07-14 03:17:53.089745] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.023 [2024-07-14 03:17:53.089759] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.023 [2024-07-14 03:17:53.089787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.023 qpair failed and we were unable to recover it. 
00:29:58.023 [2024-07-14 03:17:53.099650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.023 [2024-07-14 03:17:53.099813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.023 [2024-07-14 03:17:53.099839] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.023 [2024-07-14 03:17:53.099855] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.023 [2024-07-14 03:17:53.099876] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.023 [2024-07-14 03:17:53.099907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.023 qpair failed and we were unable to recover it. 
00:29:58.023 [2024-07-14 03:17:53.109610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.023 [2024-07-14 03:17:53.109763] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.023 [2024-07-14 03:17:53.109789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.023 [2024-07-14 03:17:53.109805] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.023 [2024-07-14 03:17:53.109818] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.023 [2024-07-14 03:17:53.109846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.023 qpair failed and we were unable to recover it. 
00:29:58.023 [2024-07-14 03:17:53.119642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.023 [2024-07-14 03:17:53.119789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.023 [2024-07-14 03:17:53.119816] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.023 [2024-07-14 03:17:53.119831] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.023 [2024-07-14 03:17:53.119845] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.023 [2024-07-14 03:17:53.119880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.023 qpair failed and we were unable to recover it. 
00:29:58.023 [2024-07-14 03:17:53.129664] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.023 [2024-07-14 03:17:53.129830] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.023 [2024-07-14 03:17:53.129856] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.023 [2024-07-14 03:17:53.129880] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.023 [2024-07-14 03:17:53.129895] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.023 [2024-07-14 03:17:53.129924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.023 qpair failed and we were unable to recover it. 
00:29:58.023 [2024-07-14 03:17:53.139690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.023 [2024-07-14 03:17:53.139848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.023 [2024-07-14 03:17:53.139882] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.023 [2024-07-14 03:17:53.139898] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.023 [2024-07-14 03:17:53.139918] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.023 [2024-07-14 03:17:53.139947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.023 qpair failed and we were unable to recover it. 
00:29:58.023 [2024-07-14 03:17:53.149788] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.023 [2024-07-14 03:17:53.149944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.023 [2024-07-14 03:17:53.149971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.023 [2024-07-14 03:17:53.149986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.023 [2024-07-14 03:17:53.149999] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.023 [2024-07-14 03:17:53.150028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.023 qpair failed and we were unable to recover it. 
00:29:58.023 [2024-07-14 03:17:53.159770] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.023 [2024-07-14 03:17:53.159934] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.023 [2024-07-14 03:17:53.159961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.023 [2024-07-14 03:17:53.159976] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.023 [2024-07-14 03:17:53.159989] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.023 [2024-07-14 03:17:53.160018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.023 qpair failed and we were unable to recover it. 
00:29:58.023 [2024-07-14 03:17:53.169795] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.023 [2024-07-14 03:17:53.169953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.023 [2024-07-14 03:17:53.169980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.023 [2024-07-14 03:17:53.169995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.023 [2024-07-14 03:17:53.170009] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.023 [2024-07-14 03:17:53.170037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.023 qpair failed and we were unable to recover it. 
00:29:58.023 [2024-07-14 03:17:53.179804] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.023 [2024-07-14 03:17:53.179965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.023 [2024-07-14 03:17:53.179992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.023 [2024-07-14 03:17:53.180007] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.023 [2024-07-14 03:17:53.180021] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.023 [2024-07-14 03:17:53.180049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.023 qpair failed and we were unable to recover it. 
00:29:58.023 [2024-07-14 03:17:53.189856] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.023 [2024-07-14 03:17:53.190024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.023 [2024-07-14 03:17:53.190051] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.023 [2024-07-14 03:17:53.190067] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.023 [2024-07-14 03:17:53.190081] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.023 [2024-07-14 03:17:53.190109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.023 qpair failed and we were unable to recover it. 
00:29:58.023 [2024-07-14 03:17:53.199842] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.023 [2024-07-14 03:17:53.200002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.023 [2024-07-14 03:17:53.200029] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.023 [2024-07-14 03:17:53.200045] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.023 [2024-07-14 03:17:53.200059] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.024 [2024-07-14 03:17:53.200088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.024 qpair failed and we were unable to recover it. 
00:29:58.024 [2024-07-14 03:17:53.210040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.024 [2024-07-14 03:17:53.210196] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.024 [2024-07-14 03:17:53.210224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.024 [2024-07-14 03:17:53.210240] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.024 [2024-07-14 03:17:53.210253] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.024 [2024-07-14 03:17:53.210296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.024 qpair failed and we were unable to recover it. 
00:29:58.024 [2024-07-14 03:17:53.219962] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.024 [2024-07-14 03:17:53.220127] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.024 [2024-07-14 03:17:53.220154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.024 [2024-07-14 03:17:53.220169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.024 [2024-07-14 03:17:53.220182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.024 [2024-07-14 03:17:53.220211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.024 qpair failed and we were unable to recover it. 
00:29:58.024 [2024-07-14 03:17:53.230004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.024 [2024-07-14 03:17:53.230194] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.024 [2024-07-14 03:17:53.230234] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.024 [2024-07-14 03:17:53.230249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.024 [2024-07-14 03:17:53.230269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.024 [2024-07-14 03:17:53.230313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.024 qpair failed and we were unable to recover it. 
00:29:58.024 [2024-07-14 03:17:53.240004] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.024 [2024-07-14 03:17:53.240167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.024 [2024-07-14 03:17:53.240194] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.024 [2024-07-14 03:17:53.240209] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.024 [2024-07-14 03:17:53.240223] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.024 [2024-07-14 03:17:53.240251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.024 qpair failed and we were unable to recover it. 
00:29:58.024 [2024-07-14 03:17:53.250029] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.024 [2024-07-14 03:17:53.250178] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.024 [2024-07-14 03:17:53.250205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.024 [2024-07-14 03:17:53.250220] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.024 [2024-07-14 03:17:53.250234] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.024 [2024-07-14 03:17:53.250276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.024 qpair failed and we were unable to recover it. 
00:29:58.024 [2024-07-14 03:17:53.260154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.024 [2024-07-14 03:17:53.260311] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.024 [2024-07-14 03:17:53.260337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.024 [2024-07-14 03:17:53.260353] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.024 [2024-07-14 03:17:53.260382] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.024 [2024-07-14 03:17:53.260411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.024 qpair failed and we were unable to recover it. 
00:29:58.024 [2024-07-14 03:17:53.270059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.024 [2024-07-14 03:17:53.270226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.024 [2024-07-14 03:17:53.270253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.024 [2024-07-14 03:17:53.270268] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.024 [2024-07-14 03:17:53.270281] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.024 [2024-07-14 03:17:53.270311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.024 qpair failed and we were unable to recover it. 
00:29:58.282 [2024-07-14 03:17:53.280122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.282 [2024-07-14 03:17:53.280322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.282 [2024-07-14 03:17:53.280351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.282 [2024-07-14 03:17:53.280381] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.282 [2024-07-14 03:17:53.280406] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.282 [2024-07-14 03:17:53.280454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.282 qpair failed and we were unable to recover it. 
00:29:58.282 [2024-07-14 03:17:53.290149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.282 [2024-07-14 03:17:53.290303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.282 [2024-07-14 03:17:53.290329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.282 [2024-07-14 03:17:53.290344] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.282 [2024-07-14 03:17:53.290358] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.282 [2024-07-14 03:17:53.290387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.282 qpair failed and we were unable to recover it. 
00:29:58.282 [2024-07-14 03:17:53.300176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.282 [2024-07-14 03:17:53.300331] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.282 [2024-07-14 03:17:53.300357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.282 [2024-07-14 03:17:53.300372] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.282 [2024-07-14 03:17:53.300385] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.282 [2024-07-14 03:17:53.300415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.282 qpair failed and we were unable to recover it. 
00:29:58.282 [2024-07-14 03:17:53.310192] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.282 [2024-07-14 03:17:53.310343] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.282 [2024-07-14 03:17:53.310368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.282 [2024-07-14 03:17:53.310383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.282 [2024-07-14 03:17:53.310396] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.282 [2024-07-14 03:17:53.310425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.282 qpair failed and we were unable to recover it. 
00:29:58.282 [2024-07-14 03:17:53.320204] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.282 [2024-07-14 03:17:53.320367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.282 [2024-07-14 03:17:53.320393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.282 [2024-07-14 03:17:53.320414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.282 [2024-07-14 03:17:53.320429] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.282 [2024-07-14 03:17:53.320458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.282 qpair failed and we were unable to recover it. 
00:29:58.282 [2024-07-14 03:17:53.330244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.282 [2024-07-14 03:17:53.330402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.282 [2024-07-14 03:17:53.330427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.282 [2024-07-14 03:17:53.330442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.282 [2024-07-14 03:17:53.330455] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.282 [2024-07-14 03:17:53.330484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.282 qpair failed and we were unable to recover it. 
00:29:58.282 [2024-07-14 03:17:53.340314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.282 [2024-07-14 03:17:53.340494] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.282 [2024-07-14 03:17:53.340521] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.282 [2024-07-14 03:17:53.340551] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.282 [2024-07-14 03:17:53.340564] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.282 [2024-07-14 03:17:53.340592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.282 qpair failed and we were unable to recover it. 
00:29:58.282 [2024-07-14 03:17:53.350327] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.282 [2024-07-14 03:17:53.350481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.282 [2024-07-14 03:17:53.350507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.282 [2024-07-14 03:17:53.350522] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.282 [2024-07-14 03:17:53.350535] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.282 [2024-07-14 03:17:53.350564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.282 qpair failed and we were unable to recover it. 
00:29:58.282 [2024-07-14 03:17:53.360312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.282 [2024-07-14 03:17:53.360465] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.282 [2024-07-14 03:17:53.360491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.282 [2024-07-14 03:17:53.360507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.282 [2024-07-14 03:17:53.360520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.282 [2024-07-14 03:17:53.360550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.282 qpair failed and we were unable to recover it. 
00:29:58.282 [2024-07-14 03:17:53.370403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.282 [2024-07-14 03:17:53.370566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.283 [2024-07-14 03:17:53.370592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.283 [2024-07-14 03:17:53.370607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.283 [2024-07-14 03:17:53.370621] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.283 [2024-07-14 03:17:53.370651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.283 qpair failed and we were unable to recover it. 
00:29:58.283 [2024-07-14 03:17:53.380380] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.283 [2024-07-14 03:17:53.380533] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.283 [2024-07-14 03:17:53.380558] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.283 [2024-07-14 03:17:53.380573] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.283 [2024-07-14 03:17:53.380586] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.283 [2024-07-14 03:17:53.380616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.283 qpair failed and we were unable to recover it. 
00:29:58.283 [2024-07-14 03:17:53.390439] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.283 [2024-07-14 03:17:53.390597] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.283 [2024-07-14 03:17:53.390624] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.283 [2024-07-14 03:17:53.390639] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.283 [2024-07-14 03:17:53.390653] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.283 [2024-07-14 03:17:53.390683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.283 qpair failed and we were unable to recover it. 
00:29:58.283 [2024-07-14 03:17:53.400442] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.283 [2024-07-14 03:17:53.400587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.283 [2024-07-14 03:17:53.400614] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.283 [2024-07-14 03:17:53.400629] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.283 [2024-07-14 03:17:53.400642] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.283 [2024-07-14 03:17:53.400671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.283 qpair failed and we were unable to recover it. 
00:29:58.283 [2024-07-14 03:17:53.410479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.283 [2024-07-14 03:17:53.410632] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.283 [2024-07-14 03:17:53.410665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.283 [2024-07-14 03:17:53.410685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.283 [2024-07-14 03:17:53.410699] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.283 [2024-07-14 03:17:53.410728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.283 qpair failed and we were unable to recover it. 
00:29:58.283 [2024-07-14 03:17:53.420525] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.283 [2024-07-14 03:17:53.420685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.283 [2024-07-14 03:17:53.420711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.283 [2024-07-14 03:17:53.420726] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.283 [2024-07-14 03:17:53.420739] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.283 [2024-07-14 03:17:53.420769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.283 qpair failed and we were unable to recover it. 
00:29:58.283 [2024-07-14 03:17:53.430537] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.283 [2024-07-14 03:17:53.430689] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.283 [2024-07-14 03:17:53.430715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.283 [2024-07-14 03:17:53.430730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.283 [2024-07-14 03:17:53.430744] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.283 [2024-07-14 03:17:53.430772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.283 qpair failed and we were unable to recover it. 
00:29:58.283 [2024-07-14 03:17:53.440617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.283 [2024-07-14 03:17:53.440820] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.283 [2024-07-14 03:17:53.440847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.283 [2024-07-14 03:17:53.440862] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.283 [2024-07-14 03:17:53.440886] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.283 [2024-07-14 03:17:53.440916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.283 qpair failed and we were unable to recover it. 
00:29:58.283 [2024-07-14 03:17:53.450617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.283 [2024-07-14 03:17:53.450776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.283 [2024-07-14 03:17:53.450802] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.283 [2024-07-14 03:17:53.450817] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.283 [2024-07-14 03:17:53.450831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.283 [2024-07-14 03:17:53.450861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.283 qpair failed and we were unable to recover it. 
00:29:58.283 [2024-07-14 03:17:53.460624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.283 [2024-07-14 03:17:53.460778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.283 [2024-07-14 03:17:53.460804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.283 [2024-07-14 03:17:53.460819] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.283 [2024-07-14 03:17:53.460834] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.283 [2024-07-14 03:17:53.460863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.283 qpair failed and we were unable to recover it. 
00:29:58.283 [2024-07-14 03:17:53.470667] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.283 [2024-07-14 03:17:53.470830] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.283 [2024-07-14 03:17:53.470856] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.283 [2024-07-14 03:17:53.470879] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.283 [2024-07-14 03:17:53.470893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.283 [2024-07-14 03:17:53.470924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.283 qpair failed and we were unable to recover it.
00:29:58.283 [2024-07-14 03:17:53.480777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.283 [2024-07-14 03:17:53.480939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.283 [2024-07-14 03:17:53.480966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.283 [2024-07-14 03:17:53.480981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.283 [2024-07-14 03:17:53.481002] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.283 [2024-07-14 03:17:53.481032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.283 qpair failed and we were unable to recover it.
00:29:58.283 [2024-07-14 03:17:53.490750] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.283 [2024-07-14 03:17:53.490913] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.283 [2024-07-14 03:17:53.490939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.283 [2024-07-14 03:17:53.490954] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.283 [2024-07-14 03:17:53.490968] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.283 [2024-07-14 03:17:53.490997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.283 qpair failed and we were unable to recover it.
00:29:58.283 [2024-07-14 03:17:53.500787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.283 [2024-07-14 03:17:53.500955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.283 [2024-07-14 03:17:53.500982] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.283 [2024-07-14 03:17:53.501002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.283 [2024-07-14 03:17:53.501017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.283 [2024-07-14 03:17:53.501046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.283 qpair failed and we were unable to recover it.
00:29:58.283 [2024-07-14 03:17:53.510793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.283 [2024-07-14 03:17:53.510951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.283 [2024-07-14 03:17:53.510976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.284 [2024-07-14 03:17:53.511003] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.284 [2024-07-14 03:17:53.511017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.284 [2024-07-14 03:17:53.511045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.284 qpair failed and we were unable to recover it.
00:29:58.284 [2024-07-14 03:17:53.520813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.284 [2024-07-14 03:17:53.520959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.284 [2024-07-14 03:17:53.520985] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.284 [2024-07-14 03:17:53.521000] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.284 [2024-07-14 03:17:53.521014] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.284 [2024-07-14 03:17:53.521043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.284 qpair failed and we were unable to recover it.
00:29:58.284 [2024-07-14 03:17:53.530817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.284 [2024-07-14 03:17:53.530983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.284 [2024-07-14 03:17:53.531009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.284 [2024-07-14 03:17:53.531025] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.284 [2024-07-14 03:17:53.531039] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.284 [2024-07-14 03:17:53.531068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.284 qpair failed and we were unable to recover it.
00:29:58.542 [2024-07-14 03:17:53.540858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.542 [2024-07-14 03:17:53.541021] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.542 [2024-07-14 03:17:53.541050] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.542 [2024-07-14 03:17:53.541066] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.542 [2024-07-14 03:17:53.541079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.542 [2024-07-14 03:17:53.541110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.542 qpair failed and we were unable to recover it.
00:29:58.542 [2024-07-14 03:17:53.550890] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.542 [2024-07-14 03:17:53.551066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.542 [2024-07-14 03:17:53.551093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.542 [2024-07-14 03:17:53.551109] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.542 [2024-07-14 03:17:53.551123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.542 [2024-07-14 03:17:53.551153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.542 qpair failed and we were unable to recover it.
00:29:58.542 [2024-07-14 03:17:53.560916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.542 [2024-07-14 03:17:53.561100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.542 [2024-07-14 03:17:53.561128] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.542 [2024-07-14 03:17:53.561143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.542 [2024-07-14 03:17:53.561157] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.542 [2024-07-14 03:17:53.561185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.542 qpair failed and we were unable to recover it.
00:29:58.542 [2024-07-14 03:17:53.570961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.542 [2024-07-14 03:17:53.571137] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.542 [2024-07-14 03:17:53.571163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.542 [2024-07-14 03:17:53.571179] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.542 [2024-07-14 03:17:53.571192] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.542 [2024-07-14 03:17:53.571236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.542 qpair failed and we were unable to recover it.
00:29:58.542 [2024-07-14 03:17:53.581062] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.542 [2024-07-14 03:17:53.581227] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.542 [2024-07-14 03:17:53.581254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.542 [2024-07-14 03:17:53.581269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.542 [2024-07-14 03:17:53.581298] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.542 [2024-07-14 03:17:53.581327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.542 qpair failed and we were unable to recover it.
00:29:58.542 [2024-07-14 03:17:53.591040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.542 [2024-07-14 03:17:53.591196] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.542 [2024-07-14 03:17:53.591223] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.542 [2024-07-14 03:17:53.591243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.542 [2024-07-14 03:17:53.591258] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.542 [2024-07-14 03:17:53.591302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.542 qpair failed and we were unable to recover it.
00:29:58.542 [2024-07-14 03:17:53.601032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.542 [2024-07-14 03:17:53.601187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.542 [2024-07-14 03:17:53.601215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.542 [2024-07-14 03:17:53.601230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.542 [2024-07-14 03:17:53.601244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.542 [2024-07-14 03:17:53.601272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.542 qpair failed and we were unable to recover it.
00:29:58.542 [2024-07-14 03:17:53.611116] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.542 [2024-07-14 03:17:53.611315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.542 [2024-07-14 03:17:53.611342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.542 [2024-07-14 03:17:53.611357] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.542 [2024-07-14 03:17:53.611370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.542 [2024-07-14 03:17:53.611398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.542 qpair failed and we were unable to recover it.
00:29:58.542 [2024-07-14 03:17:53.621118] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.542 [2024-07-14 03:17:53.621281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.542 [2024-07-14 03:17:53.621307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.542 [2024-07-14 03:17:53.621323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.542 [2024-07-14 03:17:53.621337] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.542 [2024-07-14 03:17:53.621365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.542 qpair failed and we were unable to recover it.
00:29:58.542 [2024-07-14 03:17:53.631159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.543 [2024-07-14 03:17:53.631311] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.543 [2024-07-14 03:17:53.631337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.543 [2024-07-14 03:17:53.631352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.543 [2024-07-14 03:17:53.631365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.543 [2024-07-14 03:17:53.631394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.543 qpair failed and we were unable to recover it.
00:29:58.543 [2024-07-14 03:17:53.641145] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.543 [2024-07-14 03:17:53.641301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.543 [2024-07-14 03:17:53.641327] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.543 [2024-07-14 03:17:53.641342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.543 [2024-07-14 03:17:53.641356] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.543 [2024-07-14 03:17:53.641385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.543 qpair failed and we were unable to recover it.
00:29:58.543 [2024-07-14 03:17:53.651210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.543 [2024-07-14 03:17:53.651365] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.543 [2024-07-14 03:17:53.651391] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.543 [2024-07-14 03:17:53.651407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.543 [2024-07-14 03:17:53.651421] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.543 [2024-07-14 03:17:53.651450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.543 qpair failed and we were unable to recover it.
00:29:58.543 [2024-07-14 03:17:53.661221] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.543 [2024-07-14 03:17:53.661404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.543 [2024-07-14 03:17:53.661431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.543 [2024-07-14 03:17:53.661446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.543 [2024-07-14 03:17:53.661459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.543 [2024-07-14 03:17:53.661488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.543 qpair failed and we were unable to recover it.
00:29:58.543 [2024-07-14 03:17:53.671237] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.543 [2024-07-14 03:17:53.671420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.543 [2024-07-14 03:17:53.671447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.543 [2024-07-14 03:17:53.671463] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.543 [2024-07-14 03:17:53.671477] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.543 [2024-07-14 03:17:53.671506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.543 qpair failed and we were unable to recover it.
00:29:58.543 [2024-07-14 03:17:53.681339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.543 [2024-07-14 03:17:53.681522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.543 [2024-07-14 03:17:53.681553] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.543 [2024-07-14 03:17:53.681570] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.543 [2024-07-14 03:17:53.681584] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.543 [2024-07-14 03:17:53.681613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.543 qpair failed and we were unable to recover it.
00:29:58.543 [2024-07-14 03:17:53.691335] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.543 [2024-07-14 03:17:53.691501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.543 [2024-07-14 03:17:53.691527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.543 [2024-07-14 03:17:53.691543] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.543 [2024-07-14 03:17:53.691557] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.543 [2024-07-14 03:17:53.691584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.543 qpair failed and we were unable to recover it.
00:29:58.543 [2024-07-14 03:17:53.701292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.543 [2024-07-14 03:17:53.701449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.543 [2024-07-14 03:17:53.701476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.543 [2024-07-14 03:17:53.701491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.543 [2024-07-14 03:17:53.701505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.543 [2024-07-14 03:17:53.701533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.543 qpair failed and we were unable to recover it.
00:29:58.543 [2024-07-14 03:17:53.711421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.543 [2024-07-14 03:17:53.711575] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.543 [2024-07-14 03:17:53.711602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.543 [2024-07-14 03:17:53.711618] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.543 [2024-07-14 03:17:53.711632] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.543 [2024-07-14 03:17:53.711659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.543 qpair failed and we were unable to recover it.
00:29:58.543 [2024-07-14 03:17:53.721379] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.543 [2024-07-14 03:17:53.721526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.543 [2024-07-14 03:17:53.721552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.543 [2024-07-14 03:17:53.721567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.543 [2024-07-14 03:17:53.721582] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.543 [2024-07-14 03:17:53.721610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.543 qpair failed and we were unable to recover it.
00:29:58.543 [2024-07-14 03:17:53.731408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.543 [2024-07-14 03:17:53.731595] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.543 [2024-07-14 03:17:53.731621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.543 [2024-07-14 03:17:53.731636] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.543 [2024-07-14 03:17:53.731650] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.543 [2024-07-14 03:17:53.731678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.543 qpair failed and we were unable to recover it.
00:29:58.543 [2024-07-14 03:17:53.741420] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.543 [2024-07-14 03:17:53.741574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.543 [2024-07-14 03:17:53.741600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.543 [2024-07-14 03:17:53.741616] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.543 [2024-07-14 03:17:53.741630] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.543 [2024-07-14 03:17:53.741658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.543 qpair failed and we were unable to recover it.
00:29:58.543 [2024-07-14 03:17:53.751451] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.543 [2024-07-14 03:17:53.751601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.543 [2024-07-14 03:17:53.751627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.543 [2024-07-14 03:17:53.751643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.543 [2024-07-14 03:17:53.751657] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.543 [2024-07-14 03:17:53.751685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.543 qpair failed and we were unable to recover it.
00:29:58.543 [2024-07-14 03:17:53.761516] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.543 [2024-07-14 03:17:53.761692] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.543 [2024-07-14 03:17:53.761718] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.543 [2024-07-14 03:17:53.761733] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.543 [2024-07-14 03:17:53.761763] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.543 [2024-07-14 03:17:53.761793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.543 qpair failed and we were unable to recover it.
00:29:58.543 [2024-07-14 03:17:53.771514] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.543 [2024-07-14 03:17:53.771670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.543 [2024-07-14 03:17:53.771703] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.543 [2024-07-14 03:17:53.771720] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.544 [2024-07-14 03:17:53.771734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.544 [2024-07-14 03:17:53.771763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.544 qpair failed and we were unable to recover it.
00:29:58.544 [2024-07-14 03:17:53.781523] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.544 [2024-07-14 03:17:53.781693] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.544 [2024-07-14 03:17:53.781720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.544 [2024-07-14 03:17:53.781736] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.544 [2024-07-14 03:17:53.781749] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.544 [2024-07-14 03:17:53.781778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.544 qpair failed and we were unable to recover it.
00:29:58.544 [2024-07-14 03:17:53.791652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.544 [2024-07-14 03:17:53.791804] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.544 [2024-07-14 03:17:53.791833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.544 [2024-07-14 03:17:53.791849] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.544 [2024-07-14 03:17:53.791864] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.544 [2024-07-14 03:17:53.791902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.544 qpair failed and we were unable to recover it.
00:29:58.802 [2024-07-14 03:17:53.801589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.802 [2024-07-14 03:17:53.801742] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.802 [2024-07-14 03:17:53.801770] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.802 [2024-07-14 03:17:53.801786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.802 [2024-07-14 03:17:53.801800] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.802 [2024-07-14 03:17:53.801829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.802 qpair failed and we were unable to recover it.
00:29:58.802 [2024-07-14 03:17:53.811682] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.802 [2024-07-14 03:17:53.811852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.802 [2024-07-14 03:17:53.811888] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.802 [2024-07-14 03:17:53.811904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.802 [2024-07-14 03:17:53.811918] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.802 [2024-07-14 03:17:53.811953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.802 qpair failed and we were unable to recover it.
00:29:58.802 [2024-07-14 03:17:53.821641] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:58.802 [2024-07-14 03:17:53.821797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:58.802 [2024-07-14 03:17:53.821824] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:58.802 [2024-07-14 03:17:53.821840] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:58.802 [2024-07-14 03:17:53.821854] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:58.802 [2024-07-14 03:17:53.821888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:58.802 qpair failed and we were unable to recover it.
00:29:58.802 [2024-07-14 03:17:53.831757] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.802 [2024-07-14 03:17:53.831953] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.802 [2024-07-14 03:17:53.831980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.802 [2024-07-14 03:17:53.831996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.802 [2024-07-14 03:17:53.832009] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.802 [2024-07-14 03:17:53.832038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.802 qpair failed and we were unable to recover it. 
00:29:58.802 [2024-07-14 03:17:53.841700] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.802 [2024-07-14 03:17:53.841847] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.802 [2024-07-14 03:17:53.841882] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.802 [2024-07-14 03:17:53.841899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.802 [2024-07-14 03:17:53.841915] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.802 [2024-07-14 03:17:53.841944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.802 qpair failed and we were unable to recover it. 
00:29:58.802 [2024-07-14 03:17:53.851745] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.802 [2024-07-14 03:17:53.851908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.802 [2024-07-14 03:17:53.851935] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.802 [2024-07-14 03:17:53.851950] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.802 [2024-07-14 03:17:53.851964] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.802 [2024-07-14 03:17:53.851997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.802 qpair failed and we were unable to recover it. 
00:29:58.802 [2024-07-14 03:17:53.861764] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.802 [2024-07-14 03:17:53.861935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.802 [2024-07-14 03:17:53.861967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.802 [2024-07-14 03:17:53.861984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.802 [2024-07-14 03:17:53.862008] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.802 [2024-07-14 03:17:53.862037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.802 qpair failed and we were unable to recover it. 
00:29:58.802 [2024-07-14 03:17:53.871888] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.803 [2024-07-14 03:17:53.872107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.803 [2024-07-14 03:17:53.872133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.803 [2024-07-14 03:17:53.872148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.803 [2024-07-14 03:17:53.872161] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.803 [2024-07-14 03:17:53.872189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.803 qpair failed and we were unable to recover it. 
00:29:58.803 [2024-07-14 03:17:53.881815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.803 [2024-07-14 03:17:53.881977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.803 [2024-07-14 03:17:53.882004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.803 [2024-07-14 03:17:53.882019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.803 [2024-07-14 03:17:53.882032] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.803 [2024-07-14 03:17:53.882062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.803 qpair failed and we were unable to recover it. 
00:29:58.803 [2024-07-14 03:17:53.891878] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.803 [2024-07-14 03:17:53.892035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.803 [2024-07-14 03:17:53.892062] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.803 [2024-07-14 03:17:53.892077] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.803 [2024-07-14 03:17:53.892091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.803 [2024-07-14 03:17:53.892120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.803 qpair failed and we were unable to recover it. 
00:29:58.803 [2024-07-14 03:17:53.901936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.803 [2024-07-14 03:17:53.902135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.803 [2024-07-14 03:17:53.902161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.803 [2024-07-14 03:17:53.902177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.803 [2024-07-14 03:17:53.902190] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.803 [2024-07-14 03:17:53.902225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.803 qpair failed and we were unable to recover it. 
00:29:58.803 [2024-07-14 03:17:53.911973] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.803 [2024-07-14 03:17:53.912122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.803 [2024-07-14 03:17:53.912149] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.803 [2024-07-14 03:17:53.912164] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.803 [2024-07-14 03:17:53.912178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.803 [2024-07-14 03:17:53.912222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.803 qpair failed and we were unable to recover it. 
00:29:58.803 [2024-07-14 03:17:53.922019] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.803 [2024-07-14 03:17:53.922179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.803 [2024-07-14 03:17:53.922208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.803 [2024-07-14 03:17:53.922226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.803 [2024-07-14 03:17:53.922240] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.803 [2024-07-14 03:17:53.922284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.803 qpair failed and we were unable to recover it. 
00:29:58.803 [2024-07-14 03:17:53.932016] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.803 [2024-07-14 03:17:53.932171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.803 [2024-07-14 03:17:53.932200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.803 [2024-07-14 03:17:53.932218] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.803 [2024-07-14 03:17:53.932232] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.803 [2024-07-14 03:17:53.932261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.803 qpair failed and we were unable to recover it. 
00:29:58.803 [2024-07-14 03:17:53.942017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.803 [2024-07-14 03:17:53.942169] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.803 [2024-07-14 03:17:53.942196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.803 [2024-07-14 03:17:53.942211] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.803 [2024-07-14 03:17:53.942226] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.803 [2024-07-14 03:17:53.942256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.803 qpair failed and we were unable to recover it. 
00:29:58.803 [2024-07-14 03:17:53.952062] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.803 [2024-07-14 03:17:53.952213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.803 [2024-07-14 03:17:53.952244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.803 [2024-07-14 03:17:53.952261] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.803 [2024-07-14 03:17:53.952276] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.803 [2024-07-14 03:17:53.952305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.803 qpair failed and we were unable to recover it. 
00:29:58.803 [2024-07-14 03:17:53.962085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.803 [2024-07-14 03:17:53.962281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.803 [2024-07-14 03:17:53.962309] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.803 [2024-07-14 03:17:53.962325] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.803 [2024-07-14 03:17:53.962338] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.803 [2024-07-14 03:17:53.962369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.803 qpair failed and we were unable to recover it. 
00:29:58.803 [2024-07-14 03:17:53.972115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.803 [2024-07-14 03:17:53.972271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.803 [2024-07-14 03:17:53.972297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.803 [2024-07-14 03:17:53.972312] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.803 [2024-07-14 03:17:53.972325] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.803 [2024-07-14 03:17:53.972355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.803 qpair failed and we were unable to recover it. 
00:29:58.803 [2024-07-14 03:17:53.982198] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.803 [2024-07-14 03:17:53.982349] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.803 [2024-07-14 03:17:53.982375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.803 [2024-07-14 03:17:53.982390] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.803 [2024-07-14 03:17:53.982403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.803 [2024-07-14 03:17:53.982448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.803 qpair failed and we were unable to recover it. 
00:29:58.803 [2024-07-14 03:17:53.992165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.803 [2024-07-14 03:17:53.992315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.803 [2024-07-14 03:17:53.992339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.803 [2024-07-14 03:17:53.992354] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.803 [2024-07-14 03:17:53.992367] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.803 [2024-07-14 03:17:53.992399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.803 qpair failed and we were unable to recover it. 
00:29:58.803 [2024-07-14 03:17:54.002209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.803 [2024-07-14 03:17:54.002409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.803 [2024-07-14 03:17:54.002436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.803 [2024-07-14 03:17:54.002451] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.803 [2024-07-14 03:17:54.002464] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.803 [2024-07-14 03:17:54.002494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.803 qpair failed and we were unable to recover it. 
00:29:58.803 [2024-07-14 03:17:54.012209] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.803 [2024-07-14 03:17:54.012375] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.803 [2024-07-14 03:17:54.012401] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.803 [2024-07-14 03:17:54.012416] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.803 [2024-07-14 03:17:54.012430] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.804 [2024-07-14 03:17:54.012459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.804 qpair failed and we were unable to recover it. 
00:29:58.804 [2024-07-14 03:17:54.022280] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.804 [2024-07-14 03:17:54.022433] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.804 [2024-07-14 03:17:54.022460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.804 [2024-07-14 03:17:54.022475] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.804 [2024-07-14 03:17:54.022489] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.804 [2024-07-14 03:17:54.022519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.804 qpair failed and we were unable to recover it. 
00:29:58.804 [2024-07-14 03:17:54.032254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.804 [2024-07-14 03:17:54.032406] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.804 [2024-07-14 03:17:54.032432] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.804 [2024-07-14 03:17:54.032448] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.804 [2024-07-14 03:17:54.032461] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.804 [2024-07-14 03:17:54.032490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.804 qpair failed and we were unable to recover it. 
00:29:58.804 [2024-07-14 03:17:54.042337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.804 [2024-07-14 03:17:54.042519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.804 [2024-07-14 03:17:54.042566] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.804 [2024-07-14 03:17:54.042583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.804 [2024-07-14 03:17:54.042597] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.804 [2024-07-14 03:17:54.042626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.804 qpair failed and we were unable to recover it. 
00:29:58.804 [2024-07-14 03:17:54.052348] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.804 [2024-07-14 03:17:54.052502] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.804 [2024-07-14 03:17:54.052530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.804 [2024-07-14 03:17:54.052546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.804 [2024-07-14 03:17:54.052563] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:58.804 [2024-07-14 03:17:54.052611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.804 qpair failed and we were unable to recover it. 
00:29:59.062 [2024-07-14 03:17:54.062439] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.062 [2024-07-14 03:17:54.062604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.062 [2024-07-14 03:17:54.062632] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.062 [2024-07-14 03:17:54.062648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.062 [2024-07-14 03:17:54.062663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.062 [2024-07-14 03:17:54.062693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.062 qpair failed and we were unable to recover it. 
00:29:59.062 [2024-07-14 03:17:54.072372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.062 [2024-07-14 03:17:54.072576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.062 [2024-07-14 03:17:54.072603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.062 [2024-07-14 03:17:54.072619] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.062 [2024-07-14 03:17:54.072632] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.062 [2024-07-14 03:17:54.072663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.062 qpair failed and we were unable to recover it. 
00:29:59.062 [2024-07-14 03:17:54.082384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.062 [2024-07-14 03:17:54.082535] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.062 [2024-07-14 03:17:54.082561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.062 [2024-07-14 03:17:54.082576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.062 [2024-07-14 03:17:54.082591] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.062 [2024-07-14 03:17:54.082626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.062 qpair failed and we were unable to recover it. 
00:29:59.062 [2024-07-14 03:17:54.092465] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.062 [2024-07-14 03:17:54.092620] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.062 [2024-07-14 03:17:54.092647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.062 [2024-07-14 03:17:54.092662] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.062 [2024-07-14 03:17:54.092675] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.062 [2024-07-14 03:17:54.092705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.062 qpair failed and we were unable to recover it. 
00:29:59.062 [2024-07-14 03:17:54.102450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.062 [2024-07-14 03:17:54.102602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.062 [2024-07-14 03:17:54.102629] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.062 [2024-07-14 03:17:54.102644] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.062 [2024-07-14 03:17:54.102659] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.062 [2024-07-14 03:17:54.102687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.062 qpair failed and we were unable to recover it. 
00:29:59.062 [2024-07-14 03:17:54.112486] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.062 [2024-07-14 03:17:54.112685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.062 [2024-07-14 03:17:54.112711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.062 [2024-07-14 03:17:54.112726] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.062 [2024-07-14 03:17:54.112739] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.062 [2024-07-14 03:17:54.112769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.062 qpair failed and we were unable to recover it. 
00:29:59.062 [2024-07-14 03:17:54.122543] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.062 [2024-07-14 03:17:54.122713] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.062 [2024-07-14 03:17:54.122739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.062 [2024-07-14 03:17:54.122754] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.062 [2024-07-14 03:17:54.122783] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.062 [2024-07-14 03:17:54.122812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.062 qpair failed and we were unable to recover it. 
00:29:59.062 [2024-07-14 03:17:54.132639] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.062 [2024-07-14 03:17:54.132796] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.062 [2024-07-14 03:17:54.132827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.062 [2024-07-14 03:17:54.132843] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.062 [2024-07-14 03:17:54.132857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.063 [2024-07-14 03:17:54.132893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.063 qpair failed and we were unable to recover it. 
00:29:59.063 [2024-07-14 03:17:54.142699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.063 [2024-07-14 03:17:54.142891] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.063 [2024-07-14 03:17:54.142917] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.063 [2024-07-14 03:17:54.142932] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.063 [2024-07-14 03:17:54.142946] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.063 [2024-07-14 03:17:54.142975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.063 qpair failed and we were unable to recover it. 
00:29:59.063 [2024-07-14 03:17:54.152664] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.063 [2024-07-14 03:17:54.152811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.063 [2024-07-14 03:17:54.152837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.063 [2024-07-14 03:17:54.152852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.063 [2024-07-14 03:17:54.152872] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.063 [2024-07-14 03:17:54.152903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.063 qpair failed and we were unable to recover it. 
00:29:59.063 [2024-07-14 03:17:54.162686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.063 [2024-07-14 03:17:54.162877] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.063 [2024-07-14 03:17:54.162903] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.063 [2024-07-14 03:17:54.162918] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.063 [2024-07-14 03:17:54.162932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.063 [2024-07-14 03:17:54.162962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.063 qpair failed and we were unable to recover it. 
00:29:59.063 [2024-07-14 03:17:54.172734] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.063 [2024-07-14 03:17:54.172914] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.063 [2024-07-14 03:17:54.172939] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.063 [2024-07-14 03:17:54.172954] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.063 [2024-07-14 03:17:54.172973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.063 [2024-07-14 03:17:54.173003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.063 qpair failed and we were unable to recover it. 
00:29:59.063 [2024-07-14 03:17:54.182687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.063 [2024-07-14 03:17:54.182842] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.063 [2024-07-14 03:17:54.182873] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.063 [2024-07-14 03:17:54.182890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.063 [2024-07-14 03:17:54.182905] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.063 [2024-07-14 03:17:54.182934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.063 qpair failed and we were unable to recover it. 
00:29:59.063 [2024-07-14 03:17:54.192762] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.063 [2024-07-14 03:17:54.192926] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.063 [2024-07-14 03:17:54.192953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.063 [2024-07-14 03:17:54.192968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.063 [2024-07-14 03:17:54.192981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.063 [2024-07-14 03:17:54.193012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.063 qpair failed and we were unable to recover it. 
00:29:59.063 [2024-07-14 03:17:54.202746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.063 [2024-07-14 03:17:54.202939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.063 [2024-07-14 03:17:54.202964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.063 [2024-07-14 03:17:54.202980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.063 [2024-07-14 03:17:54.202993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.063 [2024-07-14 03:17:54.203023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.063 qpair failed and we were unable to recover it. 
00:29:59.063 [2024-07-14 03:17:54.212874] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.063 [2024-07-14 03:17:54.213027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.063 [2024-07-14 03:17:54.213053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.063 [2024-07-14 03:17:54.213068] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.063 [2024-07-14 03:17:54.213081] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.063 [2024-07-14 03:17:54.213111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.063 qpair failed and we were unable to recover it. 
00:29:59.063 [2024-07-14 03:17:54.222839] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.063 [2024-07-14 03:17:54.223012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.063 [2024-07-14 03:17:54.223039] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.063 [2024-07-14 03:17:54.223054] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.063 [2024-07-14 03:17:54.223069] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.063 [2024-07-14 03:17:54.223097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.063 qpair failed and we were unable to recover it. 
00:29:59.063 [2024-07-14 03:17:54.232857] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.063 [2024-07-14 03:17:54.233043] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.063 [2024-07-14 03:17:54.233069] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.063 [2024-07-14 03:17:54.233084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.063 [2024-07-14 03:17:54.233098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.063 [2024-07-14 03:17:54.233128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.063 qpair failed and we were unable to recover it. 
00:29:59.063 [2024-07-14 03:17:54.242945] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.063 [2024-07-14 03:17:54.243110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.063 [2024-07-14 03:17:54.243136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.063 [2024-07-14 03:17:54.243151] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.063 [2024-07-14 03:17:54.243166] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.063 [2024-07-14 03:17:54.243195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.063 qpair failed and we were unable to recover it. 
00:29:59.063 [2024-07-14 03:17:54.252939] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.063 [2024-07-14 03:17:54.253098] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.063 [2024-07-14 03:17:54.253124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.063 [2024-07-14 03:17:54.253139] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.063 [2024-07-14 03:17:54.253153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.063 [2024-07-14 03:17:54.253197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.063 qpair failed and we were unable to recover it. 
00:29:59.063 [2024-07-14 03:17:54.262937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.063 [2024-07-14 03:17:54.263121] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.063 [2024-07-14 03:17:54.263147] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.063 [2024-07-14 03:17:54.263162] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.063 [2024-07-14 03:17:54.263181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.063 [2024-07-14 03:17:54.263211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.063 qpair failed and we were unable to recover it. 
00:29:59.063 [2024-07-14 03:17:54.272944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.063 [2024-07-14 03:17:54.273141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.063 [2024-07-14 03:17:54.273167] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.063 [2024-07-14 03:17:54.273182] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.063 [2024-07-14 03:17:54.273195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.063 [2024-07-14 03:17:54.273224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.063 qpair failed and we were unable to recover it. 
00:29:59.064 [2024-07-14 03:17:54.283099] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.064 [2024-07-14 03:17:54.283286] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.064 [2024-07-14 03:17:54.283311] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.064 [2024-07-14 03:17:54.283326] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.064 [2024-07-14 03:17:54.283339] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.064 [2024-07-14 03:17:54.283369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.064 qpair failed and we were unable to recover it. 
00:29:59.064 [2024-07-14 03:17:54.293026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.064 [2024-07-14 03:17:54.293220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.064 [2024-07-14 03:17:54.293247] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.064 [2024-07-14 03:17:54.293276] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.064 [2024-07-14 03:17:54.293290] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.064 [2024-07-14 03:17:54.293333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.064 qpair failed and we were unable to recover it. 
00:29:59.064 [2024-07-14 03:17:54.303126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.064 [2024-07-14 03:17:54.303282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.064 [2024-07-14 03:17:54.303307] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.064 [2024-07-14 03:17:54.303323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.064 [2024-07-14 03:17:54.303338] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.064 [2024-07-14 03:17:54.303367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.064 qpair failed and we were unable to recover it. 
00:29:59.064 [2024-07-14 03:17:54.313064] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.064 [2024-07-14 03:17:54.313267] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.064 [2024-07-14 03:17:54.313295] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.064 [2024-07-14 03:17:54.313311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.064 [2024-07-14 03:17:54.313326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.064 [2024-07-14 03:17:54.313356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.064 qpair failed and we were unable to recover it. 
00:29:59.323 [2024-07-14 03:17:54.323093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.323 [2024-07-14 03:17:54.323244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.323 [2024-07-14 03:17:54.323272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.323 [2024-07-14 03:17:54.323287] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.323 [2024-07-14 03:17:54.323302] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.323 [2024-07-14 03:17:54.323332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.323 qpair failed and we were unable to recover it. 
00:29:59.323 [2024-07-14 03:17:54.333196] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.323 [2024-07-14 03:17:54.333359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.323 [2024-07-14 03:17:54.333385] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.323 [2024-07-14 03:17:54.333400] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.323 [2024-07-14 03:17:54.333414] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.323 [2024-07-14 03:17:54.333460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.323 qpair failed and we were unable to recover it. 
00:29:59.323 [2024-07-14 03:17:54.343165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.323 [2024-07-14 03:17:54.343337] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.323 [2024-07-14 03:17:54.343364] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.323 [2024-07-14 03:17:54.343380] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.323 [2024-07-14 03:17:54.343393] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.323 [2024-07-14 03:17:54.343423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.323 qpair failed and we were unable to recover it. 
00:29:59.323 [2024-07-14 03:17:54.353261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.323 [2024-07-14 03:17:54.353419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.323 [2024-07-14 03:17:54.353445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.323 [2024-07-14 03:17:54.353460] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.323 [2024-07-14 03:17:54.353479] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.323 [2024-07-14 03:17:54.353509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.323 qpair failed and we were unable to recover it. 
00:29:59.323 [2024-07-14 03:17:54.363243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.323 [2024-07-14 03:17:54.363412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.323 [2024-07-14 03:17:54.363439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.323 [2024-07-14 03:17:54.363453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.323 [2024-07-14 03:17:54.363467] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.323 [2024-07-14 03:17:54.363496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.323 qpair failed and we were unable to recover it. 
00:29:59.323 [2024-07-14 03:17:54.373305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.323 [2024-07-14 03:17:54.373474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.323 [2024-07-14 03:17:54.373500] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.323 [2024-07-14 03:17:54.373530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.323 [2024-07-14 03:17:54.373544] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.323 [2024-07-14 03:17:54.373587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.323 qpair failed and we were unable to recover it. 
00:29:59.323 [2024-07-14 03:17:54.383320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.323 [2024-07-14 03:17:54.383474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.323 [2024-07-14 03:17:54.383500] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.323 [2024-07-14 03:17:54.383515] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.323 [2024-07-14 03:17:54.383528] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.323 [2024-07-14 03:17:54.383558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.323 qpair failed and we were unable to recover it. 
00:29:59.323 [2024-07-14 03:17:54.393385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.323 [2024-07-14 03:17:54.393543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.323 [2024-07-14 03:17:54.393568] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.323 [2024-07-14 03:17:54.393583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.323 [2024-07-14 03:17:54.393596] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.323 [2024-07-14 03:17:54.393626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.323 qpair failed and we were unable to recover it. 
00:29:59.323 [2024-07-14 03:17:54.403366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.323 [2024-07-14 03:17:54.403524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.323 [2024-07-14 03:17:54.403550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.323 [2024-07-14 03:17:54.403565] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.323 [2024-07-14 03:17:54.403579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.323 [2024-07-14 03:17:54.403623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.323 qpair failed and we were unable to recover it. 
00:29:59.323 [2024-07-14 03:17:54.413432] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.323 [2024-07-14 03:17:54.413625] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.323 [2024-07-14 03:17:54.413665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.323 [2024-07-14 03:17:54.413680] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.323 [2024-07-14 03:17:54.413695] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.323 [2024-07-14 03:17:54.413723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.323 qpair failed and we were unable to recover it. 
00:29:59.323 [2024-07-14 03:17:54.423390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.323 [2024-07-14 03:17:54.423543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.323 [2024-07-14 03:17:54.423569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.323 [2024-07-14 03:17:54.423584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.323 [2024-07-14 03:17:54.423597] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.323 [2024-07-14 03:17:54.423627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.323 qpair failed and we were unable to recover it. 
00:29:59.323 [2024-07-14 03:17:54.433438] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.323 [2024-07-14 03:17:54.433597] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.323 [2024-07-14 03:17:54.433623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.323 [2024-07-14 03:17:54.433639] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.323 [2024-07-14 03:17:54.433652] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.323 [2024-07-14 03:17:54.433683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.323 qpair failed and we were unable to recover it. 
00:29:59.323 [2024-07-14 03:17:54.443448] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.323 [2024-07-14 03:17:54.443646] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.323 [2024-07-14 03:17:54.443673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.323 [2024-07-14 03:17:54.443689] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.323 [2024-07-14 03:17:54.443708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.323 [2024-07-14 03:17:54.443738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.323 qpair failed and we were unable to recover it. 
00:29:59.323 [2024-07-14 03:17:54.453554] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.323 [2024-07-14 03:17:54.453709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.323 [2024-07-14 03:17:54.453735] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.323 [2024-07-14 03:17:54.453750] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.323 [2024-07-14 03:17:54.453764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.323 [2024-07-14 03:17:54.453808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.323 qpair failed and we were unable to recover it. 
00:29:59.323 [2024-07-14 03:17:54.463511] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.324 [2024-07-14 03:17:54.463668] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.324 [2024-07-14 03:17:54.463694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.324 [2024-07-14 03:17:54.463709] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.324 [2024-07-14 03:17:54.463722] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.324 [2024-07-14 03:17:54.463751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.324 qpair failed and we were unable to recover it. 
00:29:59.324 [2024-07-14 03:17:54.473552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.324 [2024-07-14 03:17:54.473714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.324 [2024-07-14 03:17:54.473740] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.324 [2024-07-14 03:17:54.473755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.324 [2024-07-14 03:17:54.473769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.324 [2024-07-14 03:17:54.473798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.324 qpair failed and we were unable to recover it. 
00:29:59.324 [2024-07-14 03:17:54.483564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.324 [2024-07-14 03:17:54.483727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.324 [2024-07-14 03:17:54.483752] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.324 [2024-07-14 03:17:54.483767] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.324 [2024-07-14 03:17:54.483782] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.324 [2024-07-14 03:17:54.483810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.324 qpair failed and we were unable to recover it. 
00:29:59.324 [2024-07-14 03:17:54.493585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.324 [2024-07-14 03:17:54.493737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.324 [2024-07-14 03:17:54.493763] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.324 [2024-07-14 03:17:54.493778] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.324 [2024-07-14 03:17:54.493792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.324 [2024-07-14 03:17:54.493821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.324 qpair failed and we were unable to recover it. 
00:29:59.324 [2024-07-14 03:17:54.503614] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.324 [2024-07-14 03:17:54.503761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.324 [2024-07-14 03:17:54.503787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.324 [2024-07-14 03:17:54.503802] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.324 [2024-07-14 03:17:54.503816] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.324 [2024-07-14 03:17:54.503845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.324 qpair failed and we were unable to recover it. 
00:29:59.324 [2024-07-14 03:17:54.513686] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.324 [2024-07-14 03:17:54.513846] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.324 [2024-07-14 03:17:54.513879] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.324 [2024-07-14 03:17:54.513896] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.324 [2024-07-14 03:17:54.513910] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.324 [2024-07-14 03:17:54.513939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.324 qpair failed and we were unable to recover it. 
00:29:59.324 [2024-07-14 03:17:54.523677] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.324 [2024-07-14 03:17:54.523828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.324 [2024-07-14 03:17:54.523872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.324 [2024-07-14 03:17:54.523890] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.324 [2024-07-14 03:17:54.523904] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.324 [2024-07-14 03:17:54.523934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.324 qpair failed and we were unable to recover it. 
00:29:59.324 [2024-07-14 03:17:54.533810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.324 [2024-07-14 03:17:54.533983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.324 [2024-07-14 03:17:54.534009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.324 [2024-07-14 03:17:54.534031] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.324 [2024-07-14 03:17:54.534046] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.324 [2024-07-14 03:17:54.534075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.324 qpair failed and we were unable to recover it. 
00:29:59.324 [2024-07-14 03:17:54.543785] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.324 [2024-07-14 03:17:54.543991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.324 [2024-07-14 03:17:54.544018] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.324 [2024-07-14 03:17:54.544033] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.324 [2024-07-14 03:17:54.544047] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.324 [2024-07-14 03:17:54.544075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.324 qpair failed and we were unable to recover it. 
00:29:59.324 [2024-07-14 03:17:54.553762] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.324 [2024-07-14 03:17:54.553925] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.324 [2024-07-14 03:17:54.553951] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.324 [2024-07-14 03:17:54.553966] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.324 [2024-07-14 03:17:54.553981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.324 [2024-07-14 03:17:54.554010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.324 qpair failed and we were unable to recover it. 
00:29:59.324 [2024-07-14 03:17:54.563804] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.324 [2024-07-14 03:17:54.563960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.324 [2024-07-14 03:17:54.563986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.324 [2024-07-14 03:17:54.564001] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.324 [2024-07-14 03:17:54.564015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.324 [2024-07-14 03:17:54.564044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.324 qpair failed and we were unable to recover it. 
00:29:59.324 [2024-07-14 03:17:54.573873] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.324 [2024-07-14 03:17:54.574036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.324 [2024-07-14 03:17:54.574064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.324 [2024-07-14 03:17:54.574080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.324 [2024-07-14 03:17:54.574095] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.324 [2024-07-14 03:17:54.574124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.324 qpair failed and we were unable to recover it. 
00:29:59.583 [2024-07-14 03:17:54.583967] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.583 [2024-07-14 03:17:54.584133] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.583 [2024-07-14 03:17:54.584161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.583 [2024-07-14 03:17:54.584177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.583 [2024-07-14 03:17:54.584192] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.583 [2024-07-14 03:17:54.584232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.583 qpair failed and we were unable to recover it. 
00:29:59.583 [2024-07-14 03:17:54.593922] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.583 [2024-07-14 03:17:54.594078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.583 [2024-07-14 03:17:54.594104] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.583 [2024-07-14 03:17:54.594120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.583 [2024-07-14 03:17:54.594134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.584 [2024-07-14 03:17:54.594174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.584 qpair failed and we were unable to recover it. 
00:29:59.584 [2024-07-14 03:17:54.603935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.584 [2024-07-14 03:17:54.604108] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.584 [2024-07-14 03:17:54.604133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.584 [2024-07-14 03:17:54.604159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.584 [2024-07-14 03:17:54.604172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.584 [2024-07-14 03:17:54.604202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.584 qpair failed and we were unable to recover it. 
00:29:59.584 [2024-07-14 03:17:54.614030] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.584 [2024-07-14 03:17:54.614216] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.584 [2024-07-14 03:17:54.614257] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.584 [2024-07-14 03:17:54.614272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.584 [2024-07-14 03:17:54.614286] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.584 [2024-07-14 03:17:54.614313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.584 qpair failed and we were unable to recover it. 
00:29:59.584 [2024-07-14 03:17:54.623992] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.584 [2024-07-14 03:17:54.624155] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.584 [2024-07-14 03:17:54.624181] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.584 [2024-07-14 03:17:54.624202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.584 [2024-07-14 03:17:54.624217] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.584 [2024-07-14 03:17:54.624247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.584 qpair failed and we were unable to recover it. 
00:29:59.584 [2024-07-14 03:17:54.634038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.584 [2024-07-14 03:17:54.634244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.584 [2024-07-14 03:17:54.634270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.584 [2024-07-14 03:17:54.634285] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.584 [2024-07-14 03:17:54.634299] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.584 [2024-07-14 03:17:54.634328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.584 qpair failed and we were unable to recover it. 
00:29:59.584 [2024-07-14 03:17:54.644054] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.584 [2024-07-14 03:17:54.644211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.584 [2024-07-14 03:17:54.644237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.584 [2024-07-14 03:17:54.644252] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.584 [2024-07-14 03:17:54.644267] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.584 [2024-07-14 03:17:54.644311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.584 qpair failed and we were unable to recover it. 
00:29:59.584 [2024-07-14 03:17:54.654100] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.584 [2024-07-14 03:17:54.654252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.584 [2024-07-14 03:17:54.654279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.584 [2024-07-14 03:17:54.654295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.584 [2024-07-14 03:17:54.654309] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.584 [2024-07-14 03:17:54.654338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.584 qpair failed and we were unable to recover it. 
00:29:59.584 [2024-07-14 03:17:54.664146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.584 [2024-07-14 03:17:54.664352] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.584 [2024-07-14 03:17:54.664378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.584 [2024-07-14 03:17:54.664394] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.584 [2024-07-14 03:17:54.664408] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.584 [2024-07-14 03:17:54.664436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.584 qpair failed and we were unable to recover it. 
00:29:59.584 [2024-07-14 03:17:54.674181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.584 [2024-07-14 03:17:54.674330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.584 [2024-07-14 03:17:54.674355] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.584 [2024-07-14 03:17:54.674370] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.584 [2024-07-14 03:17:54.674384] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.584 [2024-07-14 03:17:54.674428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.584 qpair failed and we were unable to recover it. 
00:29:59.584 [2024-07-14 03:17:54.684157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.584 [2024-07-14 03:17:54.684313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.584 [2024-07-14 03:17:54.684339] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.584 [2024-07-14 03:17:54.684353] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.584 [2024-07-14 03:17:54.684367] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.584 [2024-07-14 03:17:54.684397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.584 qpair failed and we were unable to recover it. 
00:29:59.584 [2024-07-14 03:17:54.694229] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.584 [2024-07-14 03:17:54.694438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.584 [2024-07-14 03:17:54.694463] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.584 [2024-07-14 03:17:54.694478] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.584 [2024-07-14 03:17:54.694492] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.584 [2024-07-14 03:17:54.694520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.584 qpair failed and we were unable to recover it. 
00:29:59.584 [2024-07-14 03:17:54.704234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.584 [2024-07-14 03:17:54.704419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.584 [2024-07-14 03:17:54.704445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.584 [2024-07-14 03:17:54.704460] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.584 [2024-07-14 03:17:54.704474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.584 [2024-07-14 03:17:54.704502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.584 qpair failed and we were unable to recover it. 
00:29:59.584 [2024-07-14 03:17:54.714279] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.584 [2024-07-14 03:17:54.714439] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.584 [2024-07-14 03:17:54.714465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.584 [2024-07-14 03:17:54.714488] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.584 [2024-07-14 03:17:54.714503] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.584 [2024-07-14 03:17:54.714532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.584 qpair failed and we were unable to recover it. 
00:29:59.584 [2024-07-14 03:17:54.724285] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.584 [2024-07-14 03:17:54.724443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.584 [2024-07-14 03:17:54.724469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.584 [2024-07-14 03:17:54.724484] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.584 [2024-07-14 03:17:54.724498] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.584 [2024-07-14 03:17:54.724527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.584 qpair failed and we were unable to recover it. 
00:29:59.584 [2024-07-14 03:17:54.734313] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.584 [2024-07-14 03:17:54.734476] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.584 [2024-07-14 03:17:54.734501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.584 [2024-07-14 03:17:54.734517] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.584 [2024-07-14 03:17:54.734531] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.584 [2024-07-14 03:17:54.734559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.585 qpair failed and we were unable to recover it. 
00:29:59.585 [2024-07-14 03:17:54.744389] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:59.585 [2024-07-14 03:17:54.744571] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:59.585 [2024-07-14 03:17:54.744597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:59.585 [2024-07-14 03:17:54.744612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:59.585 [2024-07-14 03:17:54.744626] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:29:59.585 [2024-07-14 03:17:54.744655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:59.585 qpair failed and we were unable to recover it. 
00:29:59.585 [2024-07-14 03:17:54.754398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.585 [2024-07-14 03:17:54.754589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.585 [2024-07-14 03:17:54.754614] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.585 [2024-07-14 03:17:54.754629] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.585 [2024-07-14 03:17:54.754643] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.585 [2024-07-14 03:17:54.754672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.585 qpair failed and we were unable to recover it.
00:29:59.585 [2024-07-14 03:17:54.764424] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.585 [2024-07-14 03:17:54.764577] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.585 [2024-07-14 03:17:54.764602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.585 [2024-07-14 03:17:54.764617] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.585 [2024-07-14 03:17:54.764631] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.585 [2024-07-14 03:17:54.764660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.585 qpair failed and we were unable to recover it.
00:29:59.585 [2024-07-14 03:17:54.774471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.585 [2024-07-14 03:17:54.774628] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.585 [2024-07-14 03:17:54.774653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.585 [2024-07-14 03:17:54.774669] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.585 [2024-07-14 03:17:54.774683] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.585 [2024-07-14 03:17:54.774712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.585 qpair failed and we were unable to recover it.
00:29:59.585 [2024-07-14 03:17:54.784453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.585 [2024-07-14 03:17:54.784630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.585 [2024-07-14 03:17:54.784656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.585 [2024-07-14 03:17:54.784670] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.585 [2024-07-14 03:17:54.784685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.585 [2024-07-14 03:17:54.784713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.585 qpair failed and we were unable to recover it.
00:29:59.585 [2024-07-14 03:17:54.794500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.585 [2024-07-14 03:17:54.794664] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.585 [2024-07-14 03:17:54.794689] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.585 [2024-07-14 03:17:54.794704] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.585 [2024-07-14 03:17:54.794718] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.585 [2024-07-14 03:17:54.794761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.585 qpair failed and we were unable to recover it.
00:29:59.585 [2024-07-14 03:17:54.804593] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.585 [2024-07-14 03:17:54.804746] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.585 [2024-07-14 03:17:54.804772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.585 [2024-07-14 03:17:54.804792] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.585 [2024-07-14 03:17:54.804807] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.585 [2024-07-14 03:17:54.804836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.585 qpair failed and we were unable to recover it.
00:29:59.585 [2024-07-14 03:17:54.814559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.585 [2024-07-14 03:17:54.814736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.585 [2024-07-14 03:17:54.814761] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.585 [2024-07-14 03:17:54.814776] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.585 [2024-07-14 03:17:54.814790] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.585 [2024-07-14 03:17:54.814819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.585 qpair failed and we were unable to recover it.
00:29:59.585 [2024-07-14 03:17:54.824622] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.585 [2024-07-14 03:17:54.824775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.585 [2024-07-14 03:17:54.824800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.585 [2024-07-14 03:17:54.824815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.585 [2024-07-14 03:17:54.824829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.585 [2024-07-14 03:17:54.824858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.585 qpair failed and we were unable to recover it.
00:29:59.585 [2024-07-14 03:17:54.834588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.585 [2024-07-14 03:17:54.834737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.585 [2024-07-14 03:17:54.834765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.585 [2024-07-14 03:17:54.834781] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.585 [2024-07-14 03:17:54.834795] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.585 [2024-07-14 03:17:54.834825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.585 qpair failed and we were unable to recover it.
00:29:59.844 [2024-07-14 03:17:54.844617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.844 [2024-07-14 03:17:54.844764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.844 [2024-07-14 03:17:54.844792] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.844 [2024-07-14 03:17:54.844807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.844 [2024-07-14 03:17:54.844822] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.844 [2024-07-14 03:17:54.844858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.844 qpair failed and we were unable to recover it.
00:29:59.844 [2024-07-14 03:17:54.854701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.844 [2024-07-14 03:17:54.854906] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.844 [2024-07-14 03:17:54.854933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.844 [2024-07-14 03:17:54.854948] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.844 [2024-07-14 03:17:54.854962] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.844 [2024-07-14 03:17:54.854992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.844 qpair failed and we were unable to recover it.
00:29:59.844 [2024-07-14 03:17:54.864672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.844 [2024-07-14 03:17:54.864824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.844 [2024-07-14 03:17:54.864851] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.844 [2024-07-14 03:17:54.864872] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.844 [2024-07-14 03:17:54.864886] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.844 [2024-07-14 03:17:54.864915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.844 qpair failed and we were unable to recover it.
00:29:59.844 [2024-07-14 03:17:54.874798] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.844 [2024-07-14 03:17:54.874955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.844 [2024-07-14 03:17:54.874981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.844 [2024-07-14 03:17:54.874996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.844 [2024-07-14 03:17:54.875009] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.844 [2024-07-14 03:17:54.875038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.844 qpair failed and we were unable to recover it.
00:29:59.844 [2024-07-14 03:17:54.884749] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.844 [2024-07-14 03:17:54.884918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.844 [2024-07-14 03:17:54.884944] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.844 [2024-07-14 03:17:54.884960] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.844 [2024-07-14 03:17:54.884973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.844 [2024-07-14 03:17:54.885002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.844 qpair failed and we were unable to recover it.
00:29:59.844 [2024-07-14 03:17:54.894822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.844 [2024-07-14 03:17:54.894987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.844 [2024-07-14 03:17:54.895018] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.844 [2024-07-14 03:17:54.895034] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.844 [2024-07-14 03:17:54.895047] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.844 [2024-07-14 03:17:54.895076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.844 qpair failed and we were unable to recover it.
00:29:59.844 [2024-07-14 03:17:54.904794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.844 [2024-07-14 03:17:54.904986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.844 [2024-07-14 03:17:54.905013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.844 [2024-07-14 03:17:54.905029] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.844 [2024-07-14 03:17:54.905043] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.844 [2024-07-14 03:17:54.905072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.844 qpair failed and we were unable to recover it.
00:29:59.844 [2024-07-14 03:17:54.914837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.844 [2024-07-14 03:17:54.914999] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.844 [2024-07-14 03:17:54.915026] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.844 [2024-07-14 03:17:54.915041] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.844 [2024-07-14 03:17:54.915055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.844 [2024-07-14 03:17:54.915084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.844 qpair failed and we were unable to recover it.
00:29:59.844 [2024-07-14 03:17:54.924891] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.844 [2024-07-14 03:17:54.925071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.844 [2024-07-14 03:17:54.925099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.844 [2024-07-14 03:17:54.925114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.844 [2024-07-14 03:17:54.925127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.844 [2024-07-14 03:17:54.925170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.844 qpair failed and we were unable to recover it.
00:29:59.844 [2024-07-14 03:17:54.934893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.844 [2024-07-14 03:17:54.935047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.844 [2024-07-14 03:17:54.935074] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.844 [2024-07-14 03:17:54.935089] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.844 [2024-07-14 03:17:54.935103] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.844 [2024-07-14 03:17:54.935131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.844 qpair failed and we were unable to recover it.
00:29:59.844 [2024-07-14 03:17:54.944940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.844 [2024-07-14 03:17:54.945120] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.845 [2024-07-14 03:17:54.945147] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.845 [2024-07-14 03:17:54.945163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.845 [2024-07-14 03:17:54.945177] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.845 [2024-07-14 03:17:54.945221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.845 qpair failed and we were unable to recover it.
00:29:59.845 [2024-07-14 03:17:54.954944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.845 [2024-07-14 03:17:54.955099] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.845 [2024-07-14 03:17:54.955125] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.845 [2024-07-14 03:17:54.955141] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.845 [2024-07-14 03:17:54.955154] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.845 [2024-07-14 03:17:54.955182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.845 qpair failed and we were unable to recover it.
00:29:59.845 [2024-07-14 03:17:54.965046] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.845 [2024-07-14 03:17:54.965203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.845 [2024-07-14 03:17:54.965230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.845 [2024-07-14 03:17:54.965261] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.845 [2024-07-14 03:17:54.965275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.845 [2024-07-14 03:17:54.965303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.845 qpair failed and we were unable to recover it.
00:29:59.845 [2024-07-14 03:17:54.975023] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.845 [2024-07-14 03:17:54.975202] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.845 [2024-07-14 03:17:54.975228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.845 [2024-07-14 03:17:54.975243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.845 [2024-07-14 03:17:54.975257] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.845 [2024-07-14 03:17:54.975285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.845 qpair failed and we were unable to recover it.
00:29:59.845 [2024-07-14 03:17:54.985053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.845 [2024-07-14 03:17:54.985234] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.845 [2024-07-14 03:17:54.985266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.845 [2024-07-14 03:17:54.985282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.845 [2024-07-14 03:17:54.985296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.845 [2024-07-14 03:17:54.985324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.845 qpair failed and we were unable to recover it.
00:29:59.845 [2024-07-14 03:17:54.995071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.845 [2024-07-14 03:17:54.995230] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.845 [2024-07-14 03:17:54.995255] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.845 [2024-07-14 03:17:54.995270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.845 [2024-07-14 03:17:54.995283] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.845 [2024-07-14 03:17:54.995311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.845 qpair failed and we were unable to recover it.
00:29:59.845 [2024-07-14 03:17:55.005086] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.845 [2024-07-14 03:17:55.005239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.845 [2024-07-14 03:17:55.005267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.845 [2024-07-14 03:17:55.005282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.845 [2024-07-14 03:17:55.005296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.845 [2024-07-14 03:17:55.005325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.845 qpair failed and we were unable to recover it.
00:29:59.845 [2024-07-14 03:17:55.015144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.845 [2024-07-14 03:17:55.015301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.845 [2024-07-14 03:17:55.015327] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.845 [2024-07-14 03:17:55.015343] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.845 [2024-07-14 03:17:55.015357] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.845 [2024-07-14 03:17:55.015385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.845 qpair failed and we were unable to recover it.
00:29:59.845 [2024-07-14 03:17:55.025145] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.845 [2024-07-14 03:17:55.025304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.845 [2024-07-14 03:17:55.025331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.845 [2024-07-14 03:17:55.025346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.845 [2024-07-14 03:17:55.025360] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.845 [2024-07-14 03:17:55.025394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.845 qpair failed and we were unable to recover it.
00:29:59.845 [2024-07-14 03:17:55.035196] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.845 [2024-07-14 03:17:55.035379] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.845 [2024-07-14 03:17:55.035406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.845 [2024-07-14 03:17:55.035421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.845 [2024-07-14 03:17:55.035436] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.845 [2024-07-14 03:17:55.035464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.845 qpair failed and we were unable to recover it.
00:29:59.845 [2024-07-14 03:17:55.045312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.845 [2024-07-14 03:17:55.045525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.845 [2024-07-14 03:17:55.045567] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.845 [2024-07-14 03:17:55.045583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.845 [2024-07-14 03:17:55.045596] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.845 [2024-07-14 03:17:55.045625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.845 qpair failed and we were unable to recover it.
00:29:59.845 [2024-07-14 03:17:55.055330] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.845 [2024-07-14 03:17:55.055527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.845 [2024-07-14 03:17:55.055554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.845 [2024-07-14 03:17:55.055569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.845 [2024-07-14 03:17:55.055583] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.845 [2024-07-14 03:17:55.055612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.845 qpair failed and we were unable to recover it.
00:29:59.845 [2024-07-14 03:17:55.065354] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.845 [2024-07-14 03:17:55.065553] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.845 [2024-07-14 03:17:55.065597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.845 [2024-07-14 03:17:55.065614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.845 [2024-07-14 03:17:55.065627] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.845 [2024-07-14 03:17:55.065670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.845 qpair failed and we were unable to recover it.
00:29:59.845 [2024-07-14 03:17:55.075326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.845 [2024-07-14 03:17:55.075484] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.845 [2024-07-14 03:17:55.075516] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.845 [2024-07-14 03:17:55.075532] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.845 [2024-07-14 03:17:55.075547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.845 [2024-07-14 03:17:55.075590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.845 qpair failed and we were unable to recover it.
00:29:59.845 [2024-07-14 03:17:55.085339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.845 [2024-07-14 03:17:55.085490] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.845 [2024-07-14 03:17:55.085518] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.845 [2024-07-14 03:17:55.085533] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.845 [2024-07-14 03:17:55.085547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.846 [2024-07-14 03:17:55.085576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.846 qpair failed and we were unable to recover it.
00:29:59.846 [2024-07-14 03:17:55.095371] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:59.846 [2024-07-14 03:17:55.095537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:59.846 [2024-07-14 03:17:55.095565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:59.846 [2024-07-14 03:17:55.095582] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:59.846 [2024-07-14 03:17:55.095596] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:29:59.846 [2024-07-14 03:17:55.095626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:59.846 qpair failed and we were unable to recover it.
00:30:00.104 [2024-07-14 03:17:55.105502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.104 [2024-07-14 03:17:55.105719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.104 [2024-07-14 03:17:55.105747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.104 [2024-07-14 03:17:55.105762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.104 [2024-07-14 03:17:55.105775] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.104 [2024-07-14 03:17:55.105818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.104 qpair failed and we were unable to recover it.
00:30:00.104 [2024-07-14 03:17:55.115493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.104 [2024-07-14 03:17:55.115643] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.104 [2024-07-14 03:17:55.115670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.104 [2024-07-14 03:17:55.115685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.104 [2024-07-14 03:17:55.115699] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.104 [2024-07-14 03:17:55.115733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.104 qpair failed and we were unable to recover it. 
00:30:00.104 [2024-07-14 03:17:55.125439] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.104 [2024-07-14 03:17:55.125593] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.104 [2024-07-14 03:17:55.125620] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.104 [2024-07-14 03:17:55.125636] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.104 [2024-07-14 03:17:55.125649] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.104 [2024-07-14 03:17:55.125678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.104 qpair failed and we were unable to recover it. 
00:30:00.104 [2024-07-14 03:17:55.135493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.104 [2024-07-14 03:17:55.135659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.104 [2024-07-14 03:17:55.135686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.104 [2024-07-14 03:17:55.135702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.104 [2024-07-14 03:17:55.135716] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.104 [2024-07-14 03:17:55.135744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.104 qpair failed and we were unable to recover it. 
00:30:00.104 [2024-07-14 03:17:55.145510] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.104 [2024-07-14 03:17:55.145696] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.104 [2024-07-14 03:17:55.145724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.105 [2024-07-14 03:17:55.145743] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.105 [2024-07-14 03:17:55.145758] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.105 [2024-07-14 03:17:55.145787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.105 qpair failed and we were unable to recover it. 
00:30:00.105 [2024-07-14 03:17:55.155589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.105 [2024-07-14 03:17:55.155745] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.105 [2024-07-14 03:17:55.155773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.105 [2024-07-14 03:17:55.155788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.105 [2024-07-14 03:17:55.155817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.105 [2024-07-14 03:17:55.155847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.105 qpair failed and we were unable to recover it. 
00:30:00.105 [2024-07-14 03:17:55.165581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.105 [2024-07-14 03:17:55.165732] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.105 [2024-07-14 03:17:55.165764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.105 [2024-07-14 03:17:55.165780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.105 [2024-07-14 03:17:55.165794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.105 [2024-07-14 03:17:55.165823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.105 qpair failed and we were unable to recover it. 
00:30:00.105 [2024-07-14 03:17:55.175591] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.105 [2024-07-14 03:17:55.175745] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.105 [2024-07-14 03:17:55.175772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.105 [2024-07-14 03:17:55.175788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.105 [2024-07-14 03:17:55.175802] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.105 [2024-07-14 03:17:55.175830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.105 qpair failed and we were unable to recover it. 
00:30:00.105 [2024-07-14 03:17:55.185626] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.105 [2024-07-14 03:17:55.185780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.105 [2024-07-14 03:17:55.185806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.105 [2024-07-14 03:17:55.185822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.105 [2024-07-14 03:17:55.185835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.105 [2024-07-14 03:17:55.185864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.105 qpair failed and we were unable to recover it. 
00:30:00.105 [2024-07-14 03:17:55.195636] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.105 [2024-07-14 03:17:55.195784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.105 [2024-07-14 03:17:55.195811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.105 [2024-07-14 03:17:55.195826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.105 [2024-07-14 03:17:55.195839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.105 [2024-07-14 03:17:55.195874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.105 qpair failed and we were unable to recover it. 
00:30:00.105 [2024-07-14 03:17:55.205680] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.105 [2024-07-14 03:17:55.205876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.105 [2024-07-14 03:17:55.205902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.105 [2024-07-14 03:17:55.205918] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.105 [2024-07-14 03:17:55.205932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.105 [2024-07-14 03:17:55.205965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.105 qpair failed and we were unable to recover it. 
00:30:00.105 [2024-07-14 03:17:55.215801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.105 [2024-07-14 03:17:55.215959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.105 [2024-07-14 03:17:55.215986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.105 [2024-07-14 03:17:55.216001] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.105 [2024-07-14 03:17:55.216015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.105 [2024-07-14 03:17:55.216043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.105 qpair failed and we were unable to recover it. 
00:30:00.105 [2024-07-14 03:17:55.225773] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.105 [2024-07-14 03:17:55.225935] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.105 [2024-07-14 03:17:55.225962] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.105 [2024-07-14 03:17:55.225978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.105 [2024-07-14 03:17:55.225992] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.105 [2024-07-14 03:17:55.226020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.105 qpair failed and we were unable to recover it. 
00:30:00.105 [2024-07-14 03:17:55.235789] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.105 [2024-07-14 03:17:55.235946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.105 [2024-07-14 03:17:55.235973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.105 [2024-07-14 03:17:55.235989] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.105 [2024-07-14 03:17:55.236003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.105 [2024-07-14 03:17:55.236032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.105 qpair failed and we were unable to recover it. 
00:30:00.105 [2024-07-14 03:17:55.245901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.105 [2024-07-14 03:17:55.246066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.105 [2024-07-14 03:17:55.246093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.105 [2024-07-14 03:17:55.246108] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.105 [2024-07-14 03:17:55.246122] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.105 [2024-07-14 03:17:55.246151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.105 qpair failed and we were unable to recover it. 
00:30:00.105 [2024-07-14 03:17:55.255889] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.105 [2024-07-14 03:17:55.256055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.105 [2024-07-14 03:17:55.256087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.105 [2024-07-14 03:17:55.256104] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.105 [2024-07-14 03:17:55.256117] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.105 [2024-07-14 03:17:55.256146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.105 qpair failed and we were unable to recover it. 
00:30:00.105 [2024-07-14 03:17:55.265888] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.105 [2024-07-14 03:17:55.266044] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.105 [2024-07-14 03:17:55.266071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.105 [2024-07-14 03:17:55.266086] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.105 [2024-07-14 03:17:55.266100] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.105 [2024-07-14 03:17:55.266128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.105 qpair failed and we were unable to recover it. 
00:30:00.105 [2024-07-14 03:17:55.275909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.105 [2024-07-14 03:17:55.276109] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.105 [2024-07-14 03:17:55.276137] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.105 [2024-07-14 03:17:55.276153] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.105 [2024-07-14 03:17:55.276167] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.105 [2024-07-14 03:17:55.276196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.105 qpair failed and we were unable to recover it. 
00:30:00.105 [2024-07-14 03:17:55.285994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.105 [2024-07-14 03:17:55.286150] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.105 [2024-07-14 03:17:55.286177] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.105 [2024-07-14 03:17:55.286193] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.105 [2024-07-14 03:17:55.286207] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.105 [2024-07-14 03:17:55.286235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.105 qpair failed and we were unable to recover it. 
00:30:00.106 [2024-07-14 03:17:55.295958] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.106 [2024-07-14 03:17:55.296119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.106 [2024-07-14 03:17:55.296147] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.106 [2024-07-14 03:17:55.296163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.106 [2024-07-14 03:17:55.296176] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.106 [2024-07-14 03:17:55.296210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.106 qpair failed and we were unable to recover it. 
00:30:00.106 [2024-07-14 03:17:55.305963] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.106 [2024-07-14 03:17:55.306117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.106 [2024-07-14 03:17:55.306144] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.106 [2024-07-14 03:17:55.306159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.106 [2024-07-14 03:17:55.306173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.106 [2024-07-14 03:17:55.306201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.106 qpair failed and we were unable to recover it. 
00:30:00.106 [2024-07-14 03:17:55.316026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.106 [2024-07-14 03:17:55.316180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.106 [2024-07-14 03:17:55.316206] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.106 [2024-07-14 03:17:55.316221] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.106 [2024-07-14 03:17:55.316235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.106 [2024-07-14 03:17:55.316264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.106 qpair failed and we were unable to recover it. 
00:30:00.106 [2024-07-14 03:17:55.326028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.106 [2024-07-14 03:17:55.326223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.106 [2024-07-14 03:17:55.326249] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.106 [2024-07-14 03:17:55.326264] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.106 [2024-07-14 03:17:55.326278] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.106 [2024-07-14 03:17:55.326306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.106 qpair failed and we were unable to recover it. 
00:30:00.106 [2024-07-14 03:17:55.336077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.106 [2024-07-14 03:17:55.336264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.106 [2024-07-14 03:17:55.336290] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.106 [2024-07-14 03:17:55.336306] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.106 [2024-07-14 03:17:55.336319] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.106 [2024-07-14 03:17:55.336348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.106 qpair failed and we were unable to recover it. 
00:30:00.106 [2024-07-14 03:17:55.346092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.106 [2024-07-14 03:17:55.346247] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.106 [2024-07-14 03:17:55.346278] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.106 [2024-07-14 03:17:55.346294] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.106 [2024-07-14 03:17:55.346308] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.106 [2024-07-14 03:17:55.346336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.106 qpair failed and we were unable to recover it. 
00:30:00.106 [2024-07-14 03:17:55.356152] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.106 [2024-07-14 03:17:55.356314] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.106 [2024-07-14 03:17:55.356347] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.106 [2024-07-14 03:17:55.356378] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.106 [2024-07-14 03:17:55.356405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.106 [2024-07-14 03:17:55.356453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.106 qpair failed and we were unable to recover it. 
00:30:00.365 [2024-07-14 03:17:55.366191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.365 [2024-07-14 03:17:55.366342] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.365 [2024-07-14 03:17:55.366371] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.365 [2024-07-14 03:17:55.366387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.365 [2024-07-14 03:17:55.366401] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.365 [2024-07-14 03:17:55.366431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.365 qpair failed and we were unable to recover it. 
00:30:00.365 [2024-07-14 03:17:55.376277] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.365 [2024-07-14 03:17:55.376432] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.365 [2024-07-14 03:17:55.376460] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.365 [2024-07-14 03:17:55.376476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.365 [2024-07-14 03:17:55.376490] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.365 [2024-07-14 03:17:55.376518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.365 qpair failed and we were unable to recover it. 
00:30:00.365 [2024-07-14 03:17:55.386205] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.365 [2024-07-14 03:17:55.386362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.365 [2024-07-14 03:17:55.386389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.365 [2024-07-14 03:17:55.386405] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.365 [2024-07-14 03:17:55.386426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.365 [2024-07-14 03:17:55.386456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.365 qpair failed and we were unable to recover it. 
00:30:00.365 [2024-07-14 03:17:55.396243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.365 [2024-07-14 03:17:55.396403] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.365 [2024-07-14 03:17:55.396430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.365 [2024-07-14 03:17:55.396446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.365 [2024-07-14 03:17:55.396459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.365 [2024-07-14 03:17:55.396487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.365 qpair failed and we were unable to recover it.
00:30:00.365 [2024-07-14 03:17:55.406310] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.365 [2024-07-14 03:17:55.406496] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.365 [2024-07-14 03:17:55.406523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.365 [2024-07-14 03:17:55.406538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.365 [2024-07-14 03:17:55.406552] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.365 [2024-07-14 03:17:55.406581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.365 qpair failed and we were unable to recover it.
00:30:00.365 [2024-07-14 03:17:55.416305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.365 [2024-07-14 03:17:55.416516] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.365 [2024-07-14 03:17:55.416543] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.365 [2024-07-14 03:17:55.416558] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.365 [2024-07-14 03:17:55.416572] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.365 [2024-07-14 03:17:55.416616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.365 qpair failed and we were unable to recover it.
00:30:00.365 [2024-07-14 03:17:55.426341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.365 [2024-07-14 03:17:55.426501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.365 [2024-07-14 03:17:55.426530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.365 [2024-07-14 03:17:55.426550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.365 [2024-07-14 03:17:55.426563] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.365 [2024-07-14 03:17:55.426593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.365 qpair failed and we were unable to recover it.
00:30:00.365 [2024-07-14 03:17:55.436366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.365 [2024-07-14 03:17:55.436526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.365 [2024-07-14 03:17:55.436554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.365 [2024-07-14 03:17:55.436569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.365 [2024-07-14 03:17:55.436583] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.365 [2024-07-14 03:17:55.436612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.365 qpair failed and we were unable to recover it.
00:30:00.365 [2024-07-14 03:17:55.446408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.365 [2024-07-14 03:17:55.446556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.365 [2024-07-14 03:17:55.446583] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.365 [2024-07-14 03:17:55.446598] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.365 [2024-07-14 03:17:55.446611] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.365 [2024-07-14 03:17:55.446640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.365 qpair failed and we were unable to recover it.
00:30:00.365 [2024-07-14 03:17:55.456463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.365 [2024-07-14 03:17:55.456668] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.365 [2024-07-14 03:17:55.456694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.365 [2024-07-14 03:17:55.456709] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.365 [2024-07-14 03:17:55.456723] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.365 [2024-07-14 03:17:55.456767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.365 qpair failed and we were unable to recover it.
00:30:00.365 [2024-07-14 03:17:55.466480] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.365 [2024-07-14 03:17:55.466665] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.365 [2024-07-14 03:17:55.466693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.365 [2024-07-14 03:17:55.466710] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.365 [2024-07-14 03:17:55.466727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.365 [2024-07-14 03:17:55.466772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.365 qpair failed and we were unable to recover it.
00:30:00.365 [2024-07-14 03:17:55.476468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.365 [2024-07-14 03:17:55.476621] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.365 [2024-07-14 03:17:55.476648] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.365 [2024-07-14 03:17:55.476664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.365 [2024-07-14 03:17:55.476683] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.365 [2024-07-14 03:17:55.476712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.365 qpair failed and we were unable to recover it.
00:30:00.366 [2024-07-14 03:17:55.486520] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.366 [2024-07-14 03:17:55.486673] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.366 [2024-07-14 03:17:55.486701] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.366 [2024-07-14 03:17:55.486715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.366 [2024-07-14 03:17:55.486729] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.366 [2024-07-14 03:17:55.486758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.366 qpair failed and we were unable to recover it.
00:30:00.366 [2024-07-14 03:17:55.496599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.366 [2024-07-14 03:17:55.496794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.366 [2024-07-14 03:17:55.496822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.366 [2024-07-14 03:17:55.496838] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.366 [2024-07-14 03:17:55.496852] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.366 [2024-07-14 03:17:55.496888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.366 qpair failed and we were unable to recover it.
00:30:00.366 [2024-07-14 03:17:55.506568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.366 [2024-07-14 03:17:55.506766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.366 [2024-07-14 03:17:55.506793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.366 [2024-07-14 03:17:55.506808] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.366 [2024-07-14 03:17:55.506821] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.366 [2024-07-14 03:17:55.506849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.366 qpair failed and we were unable to recover it.
00:30:00.366 [2024-07-14 03:17:55.516607] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.366 [2024-07-14 03:17:55.516756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.366 [2024-07-14 03:17:55.516782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.366 [2024-07-14 03:17:55.516798] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.366 [2024-07-14 03:17:55.516812] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.366 [2024-07-14 03:17:55.516840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.366 qpair failed and we were unable to recover it.
00:30:00.366 [2024-07-14 03:17:55.526652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.366 [2024-07-14 03:17:55.526809] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.366 [2024-07-14 03:17:55.526836] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.366 [2024-07-14 03:17:55.526851] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.366 [2024-07-14 03:17:55.526873] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.366 [2024-07-14 03:17:55.526904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.366 qpair failed and we were unable to recover it.
00:30:00.366 [2024-07-14 03:17:55.536690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.366 [2024-07-14 03:17:55.536846] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.366 [2024-07-14 03:17:55.536881] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.366 [2024-07-14 03:17:55.536898] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.366 [2024-07-14 03:17:55.536911] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.366 [2024-07-14 03:17:55.536940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.366 qpair failed and we were unable to recover it.
00:30:00.366 [2024-07-14 03:17:55.546691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.366 [2024-07-14 03:17:55.546845] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.366 [2024-07-14 03:17:55.546877] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.366 [2024-07-14 03:17:55.546894] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.366 [2024-07-14 03:17:55.546908] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.366 [2024-07-14 03:17:55.546937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.366 qpair failed and we were unable to recover it.
00:30:00.366 [2024-07-14 03:17:55.556721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.366 [2024-07-14 03:17:55.556929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.366 [2024-07-14 03:17:55.556956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.366 [2024-07-14 03:17:55.556971] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.366 [2024-07-14 03:17:55.556986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.366 [2024-07-14 03:17:55.557016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.366 qpair failed and we were unable to recover it.
00:30:00.366 [2024-07-14 03:17:55.566748] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.366 [2024-07-14 03:17:55.566904] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.366 [2024-07-14 03:17:55.566931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.366 [2024-07-14 03:17:55.566946] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.366 [2024-07-14 03:17:55.566966] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.366 [2024-07-14 03:17:55.566996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.366 qpair failed and we were unable to recover it.
00:30:00.366 [2024-07-14 03:17:55.576820] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.366 [2024-07-14 03:17:55.577028] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.366 [2024-07-14 03:17:55.577054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.366 [2024-07-14 03:17:55.577069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.366 [2024-07-14 03:17:55.577083] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.366 [2024-07-14 03:17:55.577112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.366 qpair failed and we were unable to recover it.
00:30:00.366 [2024-07-14 03:17:55.586817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.366 [2024-07-14 03:17:55.586991] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.366 [2024-07-14 03:17:55.587017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.366 [2024-07-14 03:17:55.587032] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.366 [2024-07-14 03:17:55.587045] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.366 [2024-07-14 03:17:55.587074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.366 qpair failed and we were unable to recover it.
00:30:00.366 [2024-07-14 03:17:55.596927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.366 [2024-07-14 03:17:55.597075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.366 [2024-07-14 03:17:55.597100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.366 [2024-07-14 03:17:55.597114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.366 [2024-07-14 03:17:55.597128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.366 [2024-07-14 03:17:55.597156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.366 qpair failed and we were unable to recover it.
00:30:00.366 [2024-07-14 03:17:55.606917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.366 [2024-07-14 03:17:55.607063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.366 [2024-07-14 03:17:55.607088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.366 [2024-07-14 03:17:55.607102] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.366 [2024-07-14 03:17:55.607115] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.366 [2024-07-14 03:17:55.607143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.366 qpair failed and we were unable to recover it.
00:30:00.366 [2024-07-14 03:17:55.616941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.366 [2024-07-14 03:17:55.617160] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.366 [2024-07-14 03:17:55.617188] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.366 [2024-07-14 03:17:55.617203] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.366 [2024-07-14 03:17:55.617216] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.366 [2024-07-14 03:17:55.617246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.366 qpair failed and we were unable to recover it.
00:30:00.626 [2024-07-14 03:17:55.627002] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.626 [2024-07-14 03:17:55.627157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.626 [2024-07-14 03:17:55.627185] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.626 [2024-07-14 03:17:55.627200] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.626 [2024-07-14 03:17:55.627213] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.626 [2024-07-14 03:17:55.627243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.626 qpair failed and we were unable to recover it.
00:30:00.626 [2024-07-14 03:17:55.636986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.626 [2024-07-14 03:17:55.637137] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.626 [2024-07-14 03:17:55.637163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.626 [2024-07-14 03:17:55.637178] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.626 [2024-07-14 03:17:55.637191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.626 [2024-07-14 03:17:55.637220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.626 qpair failed and we were unable to recover it.
00:30:00.626 [2024-07-14 03:17:55.647080] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.626 [2024-07-14 03:17:55.647228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.626 [2024-07-14 03:17:55.647253] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.626 [2024-07-14 03:17:55.647268] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.626 [2024-07-14 03:17:55.647281] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.626 [2024-07-14 03:17:55.647310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.626 qpair failed and we were unable to recover it.
00:30:00.626 [2024-07-14 03:17:55.657074] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.626 [2024-07-14 03:17:55.657276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.626 [2024-07-14 03:17:55.657302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.626 [2024-07-14 03:17:55.657316] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.626 [2024-07-14 03:17:55.657335] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.626 [2024-07-14 03:17:55.657365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.626 qpair failed and we were unable to recover it.
00:30:00.626 [2024-07-14 03:17:55.667106] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.626 [2024-07-14 03:17:55.667255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.626 [2024-07-14 03:17:55.667281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.626 [2024-07-14 03:17:55.667295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.626 [2024-07-14 03:17:55.667309] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.626 [2024-07-14 03:17:55.667337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.626 qpair failed and we were unable to recover it.
00:30:00.626 [2024-07-14 03:17:55.677116] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.626 [2024-07-14 03:17:55.677271] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.626 [2024-07-14 03:17:55.677298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.626 [2024-07-14 03:17:55.677313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.626 [2024-07-14 03:17:55.677327] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.626 [2024-07-14 03:17:55.677355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.626 qpair failed and we were unable to recover it.
00:30:00.626 [2024-07-14 03:17:55.687115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.626 [2024-07-14 03:17:55.687263] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.626 [2024-07-14 03:17:55.687289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.626 [2024-07-14 03:17:55.687304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.626 [2024-07-14 03:17:55.687318] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.626 [2024-07-14 03:17:55.687347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.626 qpair failed and we were unable to recover it.
00:30:00.626 [2024-07-14 03:17:55.697219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.626 [2024-07-14 03:17:55.697408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.626 [2024-07-14 03:17:55.697434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.626 [2024-07-14 03:17:55.697448] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.626 [2024-07-14 03:17:55.697462] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.626 [2024-07-14 03:17:55.697490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.626 qpair failed and we were unable to recover it.
00:30:00.626 [2024-07-14 03:17:55.707295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.626 [2024-07-14 03:17:55.707453] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.626 [2024-07-14 03:17:55.707478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.626 [2024-07-14 03:17:55.707493] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.626 [2024-07-14 03:17:55.707506] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.626 [2024-07-14 03:17:55.707534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.626 qpair failed and we were unable to recover it.
00:30:00.626 [2024-07-14 03:17:55.717257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.626 [2024-07-14 03:17:55.717444] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.626 [2024-07-14 03:17:55.717469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.626 [2024-07-14 03:17:55.717483] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.626 [2024-07-14 03:17:55.717496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.626 [2024-07-14 03:17:55.717525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.626 qpair failed and we were unable to recover it.
00:30:00.626 [2024-07-14 03:17:55.727348] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.626 [2024-07-14 03:17:55.727499] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.626 [2024-07-14 03:17:55.727524] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.626 [2024-07-14 03:17:55.727538] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.626 [2024-07-14 03:17:55.727551] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.626 [2024-07-14 03:17:55.727579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.626 qpair failed and we were unable to recover it.
00:30:00.626 [2024-07-14 03:17:55.737320] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.626 [2024-07-14 03:17:55.737491] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.626 [2024-07-14 03:17:55.737516] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.626 [2024-07-14 03:17:55.737530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.626 [2024-07-14 03:17:55.737543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.626 [2024-07-14 03:17:55.737572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.626 qpair failed and we were unable to recover it.
00:30:00.626 [2024-07-14 03:17:55.747358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:00.626 [2024-07-14 03:17:55.747526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:00.626 [2024-07-14 03:17:55.747553] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:00.626 [2024-07-14 03:17:55.747577] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:00.626 [2024-07-14 03:17:55.747592] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:00.627 [2024-07-14 03:17:55.747622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:00.627 qpair failed and we were unable to recover it.
00:30:00.627 [2024-07-14 03:17:55.757361] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.627 [2024-07-14 03:17:55.757543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.627 [2024-07-14 03:17:55.757571] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.627 [2024-07-14 03:17:55.757586] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.627 [2024-07-14 03:17:55.757600] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.627 [2024-07-14 03:17:55.757630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.627 qpair failed and we were unable to recover it. 
00:30:00.627 [2024-07-14 03:17:55.767372] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.627 [2024-07-14 03:17:55.767520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.627 [2024-07-14 03:17:55.767546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.627 [2024-07-14 03:17:55.767561] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.627 [2024-07-14 03:17:55.767574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.627 [2024-07-14 03:17:55.767603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.627 qpair failed and we were unable to recover it. 
00:30:00.627 [2024-07-14 03:17:55.777440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.627 [2024-07-14 03:17:55.777594] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.627 [2024-07-14 03:17:55.777619] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.627 [2024-07-14 03:17:55.777634] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.627 [2024-07-14 03:17:55.777647] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.627 [2024-07-14 03:17:55.777675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.627 qpair failed and we were unable to recover it. 
00:30:00.627 [2024-07-14 03:17:55.787461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.627 [2024-07-14 03:17:55.787661] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.627 [2024-07-14 03:17:55.787686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.627 [2024-07-14 03:17:55.787700] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.627 [2024-07-14 03:17:55.787713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.627 [2024-07-14 03:17:55.787742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.627 qpair failed and we were unable to recover it. 
00:30:00.627 [2024-07-14 03:17:55.797455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.627 [2024-07-14 03:17:55.797605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.627 [2024-07-14 03:17:55.797631] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.627 [2024-07-14 03:17:55.797645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.627 [2024-07-14 03:17:55.797658] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.627 [2024-07-14 03:17:55.797686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.627 qpair failed and we were unable to recover it. 
00:30:00.627 [2024-07-14 03:17:55.807515] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.627 [2024-07-14 03:17:55.807661] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.627 [2024-07-14 03:17:55.807686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.627 [2024-07-14 03:17:55.807700] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.627 [2024-07-14 03:17:55.807713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.627 [2024-07-14 03:17:55.807741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.627 qpair failed and we were unable to recover it. 
00:30:00.627 [2024-07-14 03:17:55.817572] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.627 [2024-07-14 03:17:55.817748] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.627 [2024-07-14 03:17:55.817773] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.627 [2024-07-14 03:17:55.817788] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.627 [2024-07-14 03:17:55.817801] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.627 [2024-07-14 03:17:55.817829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.627 qpair failed and we were unable to recover it. 
00:30:00.627 [2024-07-14 03:17:55.827532] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.627 [2024-07-14 03:17:55.827682] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.627 [2024-07-14 03:17:55.827707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.627 [2024-07-14 03:17:55.827721] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.627 [2024-07-14 03:17:55.827734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.627 [2024-07-14 03:17:55.827762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.627 qpair failed and we were unable to recover it. 
00:30:00.627 [2024-07-14 03:17:55.837574] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.627 [2024-07-14 03:17:55.837729] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.627 [2024-07-14 03:17:55.837754] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.627 [2024-07-14 03:17:55.837775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.627 [2024-07-14 03:17:55.837789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.627 [2024-07-14 03:17:55.837818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.627 qpair failed and we were unable to recover it. 
00:30:00.627 [2024-07-14 03:17:55.847660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.627 [2024-07-14 03:17:55.847813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.627 [2024-07-14 03:17:55.847838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.627 [2024-07-14 03:17:55.847852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.627 [2024-07-14 03:17:55.847871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.627 [2024-07-14 03:17:55.847901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.627 qpair failed and we were unable to recover it. 
00:30:00.627 [2024-07-14 03:17:55.857703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.627 [2024-07-14 03:17:55.857888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.627 [2024-07-14 03:17:55.857913] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.627 [2024-07-14 03:17:55.857927] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.627 [2024-07-14 03:17:55.857940] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.627 [2024-07-14 03:17:55.857968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.627 qpair failed and we were unable to recover it. 
00:30:00.627 [2024-07-14 03:17:55.867712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.627 [2024-07-14 03:17:55.867871] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.627 [2024-07-14 03:17:55.867898] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.627 [2024-07-14 03:17:55.867912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.627 [2024-07-14 03:17:55.867926] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.627 [2024-07-14 03:17:55.867957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.627 qpair failed and we were unable to recover it. 
00:30:00.627 [2024-07-14 03:17:55.877730] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.627 [2024-07-14 03:17:55.877931] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.627 [2024-07-14 03:17:55.877961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.627 [2024-07-14 03:17:55.877976] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.627 [2024-07-14 03:17:55.877990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.627 [2024-07-14 03:17:55.878020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.627 qpair failed and we were unable to recover it. 
00:30:00.886 [2024-07-14 03:17:55.887792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.886 [2024-07-14 03:17:55.887975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.886 [2024-07-14 03:17:55.888002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.887 [2024-07-14 03:17:55.888016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.887 [2024-07-14 03:17:55.888030] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.887 [2024-07-14 03:17:55.888059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.887 qpair failed and we were unable to recover it. 
00:30:00.887 [2024-07-14 03:17:55.897765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.887 [2024-07-14 03:17:55.897963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.887 [2024-07-14 03:17:55.897990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.887 [2024-07-14 03:17:55.898005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.887 [2024-07-14 03:17:55.898017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.887 [2024-07-14 03:17:55.898045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.887 qpair failed and we were unable to recover it. 
00:30:00.887 [2024-07-14 03:17:55.907788] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.887 [2024-07-14 03:17:55.907993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.887 [2024-07-14 03:17:55.908019] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.887 [2024-07-14 03:17:55.908033] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.887 [2024-07-14 03:17:55.908047] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.887 [2024-07-14 03:17:55.908075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.887 qpair failed and we were unable to recover it. 
00:30:00.887 [2024-07-14 03:17:55.917826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.887 [2024-07-14 03:17:55.917981] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.887 [2024-07-14 03:17:55.918006] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.887 [2024-07-14 03:17:55.918020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.887 [2024-07-14 03:17:55.918034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.887 [2024-07-14 03:17:55.918062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.887 qpair failed and we were unable to recover it. 
00:30:00.887 [2024-07-14 03:17:55.927876] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.887 [2024-07-14 03:17:55.928039] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.887 [2024-07-14 03:17:55.928064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.887 [2024-07-14 03:17:55.928085] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.887 [2024-07-14 03:17:55.928098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.887 [2024-07-14 03:17:55.928127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.887 qpair failed and we were unable to recover it. 
00:30:00.887 [2024-07-14 03:17:55.937917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.887 [2024-07-14 03:17:55.938081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.887 [2024-07-14 03:17:55.938107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.887 [2024-07-14 03:17:55.938121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.887 [2024-07-14 03:17:55.938134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.887 [2024-07-14 03:17:55.938162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.887 qpair failed and we were unable to recover it. 
00:30:00.887 [2024-07-14 03:17:55.947902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.887 [2024-07-14 03:17:55.948055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.887 [2024-07-14 03:17:55.948080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.887 [2024-07-14 03:17:55.948094] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.887 [2024-07-14 03:17:55.948108] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.887 [2024-07-14 03:17:55.948136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.887 qpair failed and we were unable to recover it. 
00:30:00.887 [2024-07-14 03:17:55.957941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.887 [2024-07-14 03:17:55.958129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.887 [2024-07-14 03:17:55.958154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.887 [2024-07-14 03:17:55.958168] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.887 [2024-07-14 03:17:55.958182] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.887 [2024-07-14 03:17:55.958210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.887 qpair failed and we were unable to recover it. 
00:30:00.887 [2024-07-14 03:17:55.967967] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.887 [2024-07-14 03:17:55.968144] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.887 [2024-07-14 03:17:55.968168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.887 [2024-07-14 03:17:55.968182] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.887 [2024-07-14 03:17:55.968195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.887 [2024-07-14 03:17:55.968223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.887 qpair failed and we were unable to recover it. 
00:30:00.887 [2024-07-14 03:17:55.978015] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.887 [2024-07-14 03:17:55.978176] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.887 [2024-07-14 03:17:55.978202] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.887 [2024-07-14 03:17:55.978221] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.887 [2024-07-14 03:17:55.978235] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.887 [2024-07-14 03:17:55.978264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.887 qpair failed and we were unable to recover it. 
00:30:00.887 [2024-07-14 03:17:55.988016] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.887 [2024-07-14 03:17:55.988168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.887 [2024-07-14 03:17:55.988194] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.887 [2024-07-14 03:17:55.988208] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.887 [2024-07-14 03:17:55.988221] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.887 [2024-07-14 03:17:55.988249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.887 qpair failed and we were unable to recover it. 
00:30:00.887 [2024-07-14 03:17:55.998067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.887 [2024-07-14 03:17:55.998219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.887 [2024-07-14 03:17:55.998243] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.887 [2024-07-14 03:17:55.998257] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.887 [2024-07-14 03:17:55.998269] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.887 [2024-07-14 03:17:55.998296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.887 qpair failed and we were unable to recover it. 
00:30:00.887 [2024-07-14 03:17:56.008071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.887 [2024-07-14 03:17:56.008217] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.887 [2024-07-14 03:17:56.008243] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.887 [2024-07-14 03:17:56.008258] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.887 [2024-07-14 03:17:56.008271] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.887 [2024-07-14 03:17:56.008298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.887 qpair failed and we were unable to recover it. 
00:30:00.887 [2024-07-14 03:17:56.018114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.887 [2024-07-14 03:17:56.018276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.887 [2024-07-14 03:17:56.018302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.887 [2024-07-14 03:17:56.018323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.887 [2024-07-14 03:17:56.018337] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.887 [2024-07-14 03:17:56.018366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.887 qpair failed and we were unable to recover it. 
00:30:00.887 [2024-07-14 03:17:56.028129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.887 [2024-07-14 03:17:56.028278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.887 [2024-07-14 03:17:56.028303] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.887 [2024-07-14 03:17:56.028317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.887 [2024-07-14 03:17:56.028330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.887 [2024-07-14 03:17:56.028358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.887 qpair failed and we were unable to recover it. 
00:30:00.888 [2024-07-14 03:17:56.038269] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.888 [2024-07-14 03:17:56.038423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.888 [2024-07-14 03:17:56.038448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.888 [2024-07-14 03:17:56.038462] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.888 [2024-07-14 03:17:56.038476] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.888 [2024-07-14 03:17:56.038504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.888 qpair failed and we were unable to recover it. 
00:30:00.888 [2024-07-14 03:17:56.048212] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.888 [2024-07-14 03:17:56.048359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.888 [2024-07-14 03:17:56.048385] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.888 [2024-07-14 03:17:56.048399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.888 [2024-07-14 03:17:56.048413] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.888 [2024-07-14 03:17:56.048441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.888 qpair failed and we were unable to recover it. 
00:30:00.888 [2024-07-14 03:17:56.058215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.888 [2024-07-14 03:17:56.058374] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.888 [2024-07-14 03:17:56.058399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.888 [2024-07-14 03:17:56.058413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.888 [2024-07-14 03:17:56.058426] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.888 [2024-07-14 03:17:56.058454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.888 qpair failed and we were unable to recover it. 
00:30:00.888 [2024-07-14 03:17:56.068290] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.888 [2024-07-14 03:17:56.068443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.888 [2024-07-14 03:17:56.068468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.888 [2024-07-14 03:17:56.068482] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.888 [2024-07-14 03:17:56.068496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.888 [2024-07-14 03:17:56.068524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.888 qpair failed and we were unable to recover it. 
00:30:00.888 [2024-07-14 03:17:56.078287] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.888 [2024-07-14 03:17:56.078471] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.888 [2024-07-14 03:17:56.078496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.888 [2024-07-14 03:17:56.078510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.888 [2024-07-14 03:17:56.078523] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.888 [2024-07-14 03:17:56.078552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.888 qpair failed and we were unable to recover it. 
00:30:00.888 [2024-07-14 03:17:56.088321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.888 [2024-07-14 03:17:56.088467] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.888 [2024-07-14 03:17:56.088492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.888 [2024-07-14 03:17:56.088507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.888 [2024-07-14 03:17:56.088520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.888 [2024-07-14 03:17:56.088548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.888 qpair failed and we were unable to recover it. 
00:30:00.888 [2024-07-14 03:17:56.098458] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.888 [2024-07-14 03:17:56.098627] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.888 [2024-07-14 03:17:56.098652] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.888 [2024-07-14 03:17:56.098666] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.888 [2024-07-14 03:17:56.098679] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.888 [2024-07-14 03:17:56.098709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.888 qpair failed and we were unable to recover it. 
00:30:00.888 [2024-07-14 03:17:56.108427] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.888 [2024-07-14 03:17:56.108604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.888 [2024-07-14 03:17:56.108634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.888 [2024-07-14 03:17:56.108649] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.888 [2024-07-14 03:17:56.108662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.888 [2024-07-14 03:17:56.108690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.888 qpair failed and we were unable to recover it. 
00:30:00.888 [2024-07-14 03:17:56.118390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.888 [2024-07-14 03:17:56.118543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.888 [2024-07-14 03:17:56.118569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.888 [2024-07-14 03:17:56.118583] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.888 [2024-07-14 03:17:56.118595] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.888 [2024-07-14 03:17:56.118624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.888 qpair failed and we were unable to recover it. 
00:30:00.888 [2024-07-14 03:17:56.128430] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.888 [2024-07-14 03:17:56.128580] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.888 [2024-07-14 03:17:56.128606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.888 [2024-07-14 03:17:56.128620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.888 [2024-07-14 03:17:56.128633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.888 [2024-07-14 03:17:56.128661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.888 qpair failed and we were unable to recover it. 
00:30:00.888 [2024-07-14 03:17:56.138465] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:00.888 [2024-07-14 03:17:56.138634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:00.888 [2024-07-14 03:17:56.138671] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:00.888 [2024-07-14 03:17:56.138700] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:00.888 [2024-07-14 03:17:56.138727] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:00.888 [2024-07-14 03:17:56.138773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:00.888 qpair failed and we were unable to recover it. 
00:30:01.147 [2024-07-14 03:17:56.148500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.147 [2024-07-14 03:17:56.148696] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.147 [2024-07-14 03:17:56.148723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.147 [2024-07-14 03:17:56.148738] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.147 [2024-07-14 03:17:56.148751] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.147 [2024-07-14 03:17:56.148781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.147 qpair failed and we were unable to recover it. 
00:30:01.147 [2024-07-14 03:17:56.158546] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.147 [2024-07-14 03:17:56.158704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.147 [2024-07-14 03:17:56.158730] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.147 [2024-07-14 03:17:56.158744] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.147 [2024-07-14 03:17:56.158757] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.147 [2024-07-14 03:17:56.158784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.147 qpair failed and we were unable to recover it. 
00:30:01.147 [2024-07-14 03:17:56.168604] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.147 [2024-07-14 03:17:56.168800] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.147 [2024-07-14 03:17:56.168825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.147 [2024-07-14 03:17:56.168838] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.147 [2024-07-14 03:17:56.168851] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.147 [2024-07-14 03:17:56.168886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.147 qpair failed and we were unable to recover it. 
00:30:01.147 [2024-07-14 03:17:56.178581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.147 [2024-07-14 03:17:56.178739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.147 [2024-07-14 03:17:56.178764] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.147 [2024-07-14 03:17:56.178778] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.147 [2024-07-14 03:17:56.178791] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.147 [2024-07-14 03:17:56.178819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.147 qpair failed and we were unable to recover it. 
00:30:01.147 [2024-07-14 03:17:56.188656] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.147 [2024-07-14 03:17:56.188812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.148 [2024-07-14 03:17:56.188837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.148 [2024-07-14 03:17:56.188851] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.148 [2024-07-14 03:17:56.188875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.148 [2024-07-14 03:17:56.188907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.148 qpair failed and we were unable to recover it. 
00:30:01.148 [2024-07-14 03:17:56.198655] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.148 [2024-07-14 03:17:56.198804] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.148 [2024-07-14 03:17:56.198837] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.148 [2024-07-14 03:17:56.198852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.148 [2024-07-14 03:17:56.198872] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.148 [2024-07-14 03:17:56.198902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.148 qpair failed and we were unable to recover it. 
00:30:01.148 [2024-07-14 03:17:56.208708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.148 [2024-07-14 03:17:56.208874] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.148 [2024-07-14 03:17:56.208903] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.148 [2024-07-14 03:17:56.208919] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.148 [2024-07-14 03:17:56.208932] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.148 [2024-07-14 03:17:56.208961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.148 qpair failed and we were unable to recover it. 
00:30:01.148 [2024-07-14 03:17:56.218727] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.148 [2024-07-14 03:17:56.218889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.148 [2024-07-14 03:17:56.218915] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.148 [2024-07-14 03:17:56.218930] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.148 [2024-07-14 03:17:56.218943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.148 [2024-07-14 03:17:56.218972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.148 qpair failed and we were unable to recover it. 
00:30:01.148 [2024-07-14 03:17:56.228747] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.148 [2024-07-14 03:17:56.228903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.148 [2024-07-14 03:17:56.228930] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.148 [2024-07-14 03:17:56.228944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.148 [2024-07-14 03:17:56.228957] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.148 [2024-07-14 03:17:56.228985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.148 qpair failed and we were unable to recover it. 
00:30:01.148 [2024-07-14 03:17:56.238759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.148 [2024-07-14 03:17:56.238913] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.148 [2024-07-14 03:17:56.238938] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.148 [2024-07-14 03:17:56.238953] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.148 [2024-07-14 03:17:56.238966] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.148 [2024-07-14 03:17:56.239000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.148 qpair failed and we were unable to recover it. 
00:30:01.148 [2024-07-14 03:17:56.248784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.148 [2024-07-14 03:17:56.248946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.148 [2024-07-14 03:17:56.248972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.148 [2024-07-14 03:17:56.248986] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.148 [2024-07-14 03:17:56.249000] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.148 [2024-07-14 03:17:56.249028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.148 qpair failed and we were unable to recover it. 
00:30:01.148 [2024-07-14 03:17:56.258891] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.148 [2024-07-14 03:17:56.259081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.148 [2024-07-14 03:17:56.259106] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.148 [2024-07-14 03:17:56.259121] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.148 [2024-07-14 03:17:56.259134] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.148 [2024-07-14 03:17:56.259162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.148 qpair failed and we were unable to recover it. 
00:30:01.148 [2024-07-14 03:17:56.268881] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.148 [2024-07-14 03:17:56.269078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.148 [2024-07-14 03:17:56.269104] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.148 [2024-07-14 03:17:56.269118] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.148 [2024-07-14 03:17:56.269131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.148 [2024-07-14 03:17:56.269160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.148 qpair failed and we were unable to recover it. 
00:30:01.148 [2024-07-14 03:17:56.278934] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.148 [2024-07-14 03:17:56.279134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.148 [2024-07-14 03:17:56.279159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.148 [2024-07-14 03:17:56.279173] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.148 [2024-07-14 03:17:56.279187] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.148 [2024-07-14 03:17:56.279215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.148 qpair failed and we were unable to recover it. 
00:30:01.148 [2024-07-14 03:17:56.288904] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.148 [2024-07-14 03:17:56.289050] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.148 [2024-07-14 03:17:56.289080] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.148 [2024-07-14 03:17:56.289095] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.148 [2024-07-14 03:17:56.289108] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.148 [2024-07-14 03:17:56.289137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.148 qpair failed and we were unable to recover it. 
00:30:01.148 [2024-07-14 03:17:56.298973] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.148 [2024-07-14 03:17:56.299136] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.148 [2024-07-14 03:17:56.299162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.148 [2024-07-14 03:17:56.299176] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.148 [2024-07-14 03:17:56.299189] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.148 [2024-07-14 03:17:56.299217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.148 qpair failed and we were unable to recover it. 
00:30:01.148 [2024-07-14 03:17:56.309052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.148 [2024-07-14 03:17:56.309203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.148 [2024-07-14 03:17:56.309229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.148 [2024-07-14 03:17:56.309243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.148 [2024-07-14 03:17:56.309257] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.148 [2024-07-14 03:17:56.309287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.148 qpair failed and we were unable to recover it. 
00:30:01.148 [2024-07-14 03:17:56.318987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.148 [2024-07-14 03:17:56.319148] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.148 [2024-07-14 03:17:56.319173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.148 [2024-07-14 03:17:56.319187] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.148 [2024-07-14 03:17:56.319201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.148 [2024-07-14 03:17:56.319229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.148 qpair failed and we were unable to recover it. 
00:30:01.148 [2024-07-14 03:17:56.329052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.148 [2024-07-14 03:17:56.329236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.148 [2024-07-14 03:17:56.329261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.148 [2024-07-14 03:17:56.329276] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.149 [2024-07-14 03:17:56.329289] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.149 [2024-07-14 03:17:56.329323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.149 qpair failed and we were unable to recover it. 
00:30:01.149 [2024-07-14 03:17:56.339098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.149 [2024-07-14 03:17:56.339252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.149 [2024-07-14 03:17:56.339276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.149 [2024-07-14 03:17:56.339291] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.149 [2024-07-14 03:17:56.339304] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.149 [2024-07-14 03:17:56.339332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.149 qpair failed and we were unable to recover it. 
00:30:01.149 [2024-07-14 03:17:56.349194] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.149 [2024-07-14 03:17:56.349354] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.149 [2024-07-14 03:17:56.349380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.149 [2024-07-14 03:17:56.349394] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.149 [2024-07-14 03:17:56.349407] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.149 [2024-07-14 03:17:56.349435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.149 qpair failed and we were unable to recover it. 
00:30:01.149 [2024-07-14 03:17:56.359110] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.149 [2024-07-14 03:17:56.359260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.149 [2024-07-14 03:17:56.359285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.149 [2024-07-14 03:17:56.359299] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.149 [2024-07-14 03:17:56.359312] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.149 [2024-07-14 03:17:56.359340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.149 qpair failed and we were unable to recover it.
00:30:01.149 [2024-07-14 03:17:56.369188] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.149 [2024-07-14 03:17:56.369341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.149 [2024-07-14 03:17:56.369365] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.149 [2024-07-14 03:17:56.369380] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.149 [2024-07-14 03:17:56.369392] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.149 [2024-07-14 03:17:56.369421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.149 qpair failed and we were unable to recover it.
00:30:01.149 [2024-07-14 03:17:56.379303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.149 [2024-07-14 03:17:56.379469] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.149 [2024-07-14 03:17:56.379499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.149 [2024-07-14 03:17:56.379513] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.149 [2024-07-14 03:17:56.379527] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.149 [2024-07-14 03:17:56.379555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.149 qpair failed and we were unable to recover it.
00:30:01.149 [2024-07-14 03:17:56.389305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.149 [2024-07-14 03:17:56.389452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.149 [2024-07-14 03:17:56.389477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.149 [2024-07-14 03:17:56.389491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.149 [2024-07-14 03:17:56.389505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.149 [2024-07-14 03:17:56.389532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.149 qpair failed and we were unable to recover it.
00:30:01.149 [2024-07-14 03:17:56.399266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.149 [2024-07-14 03:17:56.399437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.149 [2024-07-14 03:17:56.399476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.149 [2024-07-14 03:17:56.399505] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.149 [2024-07-14 03:17:56.399533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.149 [2024-07-14 03:17:56.399578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.149 qpair failed and we were unable to recover it.
00:30:01.408 [2024-07-14 03:17:56.409313] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.408 [2024-07-14 03:17:56.409483] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.408 [2024-07-14 03:17:56.409511] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.408 [2024-07-14 03:17:56.409526] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.408 [2024-07-14 03:17:56.409539] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.408 [2024-07-14 03:17:56.409568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.408 qpair failed and we were unable to recover it.
00:30:01.408 [2024-07-14 03:17:56.419323] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.408 [2024-07-14 03:17:56.419472] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.408 [2024-07-14 03:17:56.419498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.408 [2024-07-14 03:17:56.419513] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.408 [2024-07-14 03:17:56.419526] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.408 [2024-07-14 03:17:56.419560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.408 qpair failed and we were unable to recover it.
00:30:01.408 [2024-07-14 03:17:56.429355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.408 [2024-07-14 03:17:56.429507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.408 [2024-07-14 03:17:56.429533] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.408 [2024-07-14 03:17:56.429547] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.408 [2024-07-14 03:17:56.429560] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.408 [2024-07-14 03:17:56.429589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.408 qpair failed and we were unable to recover it.
00:30:01.408 [2024-07-14 03:17:56.439430] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.408 [2024-07-14 03:17:56.439584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.408 [2024-07-14 03:17:56.439610] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.408 [2024-07-14 03:17:56.439624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.408 [2024-07-14 03:17:56.439637] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.408 [2024-07-14 03:17:56.439665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.408 qpair failed and we were unable to recover it.
00:30:01.408 [2024-07-14 03:17:56.449387] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.408 [2024-07-14 03:17:56.449562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.408 [2024-07-14 03:17:56.449586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.408 [2024-07-14 03:17:56.449601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.408 [2024-07-14 03:17:56.449614] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.408 [2024-07-14 03:17:56.449642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.408 qpair failed and we were unable to recover it.
00:30:01.408 [2024-07-14 03:17:56.459457] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.408 [2024-07-14 03:17:56.459609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.408 [2024-07-14 03:17:56.459635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.408 [2024-07-14 03:17:56.459649] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.408 [2024-07-14 03:17:56.459663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.408 [2024-07-14 03:17:56.459691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.408 qpair failed and we were unable to recover it.
00:30:01.408 [2024-07-14 03:17:56.469453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.408 [2024-07-14 03:17:56.469646] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.408 [2024-07-14 03:17:56.469676] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.409 [2024-07-14 03:17:56.469692] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.409 [2024-07-14 03:17:56.469705] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.409 [2024-07-14 03:17:56.469733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.409 qpair failed and we were unable to recover it.
00:30:01.409 [2024-07-14 03:17:56.479493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.409 [2024-07-14 03:17:56.479659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.409 [2024-07-14 03:17:56.479684] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.409 [2024-07-14 03:17:56.479698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.409 [2024-07-14 03:17:56.479712] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.409 [2024-07-14 03:17:56.479739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.409 qpair failed and we were unable to recover it.
00:30:01.409 [2024-07-14 03:17:56.489571] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.409 [2024-07-14 03:17:56.489723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.409 [2024-07-14 03:17:56.489748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.409 [2024-07-14 03:17:56.489763] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.409 [2024-07-14 03:17:56.489776] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.409 [2024-07-14 03:17:56.489804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.409 qpair failed and we were unable to recover it.
00:30:01.409 [2024-07-14 03:17:56.499522] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.409 [2024-07-14 03:17:56.499678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.409 [2024-07-14 03:17:56.499703] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.409 [2024-07-14 03:17:56.499717] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.409 [2024-07-14 03:17:56.499731] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.409 [2024-07-14 03:17:56.499758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.409 qpair failed and we were unable to recover it.
00:30:01.409 [2024-07-14 03:17:56.509656] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.409 [2024-07-14 03:17:56.509814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.409 [2024-07-14 03:17:56.509839] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.409 [2024-07-14 03:17:56.509853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.409 [2024-07-14 03:17:56.509873] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.409 [2024-07-14 03:17:56.509909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.409 qpair failed and we were unable to recover it.
00:30:01.409 [2024-07-14 03:17:56.519586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.409 [2024-07-14 03:17:56.519760] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.409 [2024-07-14 03:17:56.519785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.409 [2024-07-14 03:17:56.519800] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.409 [2024-07-14 03:17:56.519813] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.409 [2024-07-14 03:17:56.519841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.409 qpair failed and we were unable to recover it.
00:30:01.409 [2024-07-14 03:17:56.529629] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.409 [2024-07-14 03:17:56.529777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.409 [2024-07-14 03:17:56.529803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.409 [2024-07-14 03:17:56.529817] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.409 [2024-07-14 03:17:56.529830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.409 [2024-07-14 03:17:56.529858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.409 qpair failed and we were unable to recover it.
00:30:01.409 [2024-07-14 03:17:56.539682] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.409 [2024-07-14 03:17:56.539839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.409 [2024-07-14 03:17:56.539872] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.409 [2024-07-14 03:17:56.539889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.409 [2024-07-14 03:17:56.539904] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.409 [2024-07-14 03:17:56.539933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.409 qpair failed and we were unable to recover it.
00:30:01.409 [2024-07-14 03:17:56.549657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.409 [2024-07-14 03:17:56.549809] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.409 [2024-07-14 03:17:56.549834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.409 [2024-07-14 03:17:56.549847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.409 [2024-07-14 03:17:56.549860] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.409 [2024-07-14 03:17:56.549895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.409 qpair failed and we were unable to recover it.
00:30:01.409 [2024-07-14 03:17:56.559724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.409 [2024-07-14 03:17:56.559918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.409 [2024-07-14 03:17:56.559948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.409 [2024-07-14 03:17:56.559963] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.409 [2024-07-14 03:17:56.559976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.409 [2024-07-14 03:17:56.560004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.409 qpair failed and we were unable to recover it.
00:30:01.409 [2024-07-14 03:17:56.569740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.409 [2024-07-14 03:17:56.569893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.409 [2024-07-14 03:17:56.569918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.409 [2024-07-14 03:17:56.569933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.409 [2024-07-14 03:17:56.569946] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.409 [2024-07-14 03:17:56.569974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.409 qpair failed and we were unable to recover it.
00:30:01.409 [2024-07-14 03:17:56.579794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.409 [2024-07-14 03:17:56.579965] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.409 [2024-07-14 03:17:56.579991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.409 [2024-07-14 03:17:56.580005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.409 [2024-07-14 03:17:56.580019] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.409 [2024-07-14 03:17:56.580047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.409 qpair failed and we were unable to recover it.
00:30:01.409 [2024-07-14 03:17:56.589797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.409 [2024-07-14 03:17:56.589951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.409 [2024-07-14 03:17:56.589976] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.409 [2024-07-14 03:17:56.589990] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.409 [2024-07-14 03:17:56.590003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.409 [2024-07-14 03:17:56.590032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.409 qpair failed and we were unable to recover it.
00:30:01.409 [2024-07-14 03:17:56.599821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.409 [2024-07-14 03:17:56.599977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.409 [2024-07-14 03:17:56.600003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.409 [2024-07-14 03:17:56.600017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.409 [2024-07-14 03:17:56.600035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.409 [2024-07-14 03:17:56.600064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.409 qpair failed and we were unable to recover it.
00:30:01.409 [2024-07-14 03:17:56.609847] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.409 [2024-07-14 03:17:56.610045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.409 [2024-07-14 03:17:56.610069] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.409 [2024-07-14 03:17:56.610083] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.409 [2024-07-14 03:17:56.610096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.409 [2024-07-14 03:17:56.610124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.410 qpair failed and we were unable to recover it.
00:30:01.410 [2024-07-14 03:17:56.619923] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.410 [2024-07-14 03:17:56.620075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.410 [2024-07-14 03:17:56.620100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.410 [2024-07-14 03:17:56.620114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.410 [2024-07-14 03:17:56.620127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.410 [2024-07-14 03:17:56.620155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.410 qpair failed and we were unable to recover it.
00:30:01.410 [2024-07-14 03:17:56.629895] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.410 [2024-07-14 03:17:56.630046] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.410 [2024-07-14 03:17:56.630071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.410 [2024-07-14 03:17:56.630084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.410 [2024-07-14 03:17:56.630098] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.410 [2024-07-14 03:17:56.630125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.410 qpair failed and we were unable to recover it.
00:30:01.410 [2024-07-14 03:17:56.640013] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.410 [2024-07-14 03:17:56.640158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.410 [2024-07-14 03:17:56.640182] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.410 [2024-07-14 03:17:56.640196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.410 [2024-07-14 03:17:56.640209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.410 [2024-07-14 03:17:56.640238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.410 qpair failed and we were unable to recover it.
00:30:01.410 [2024-07-14 03:17:56.649977] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.410 [2024-07-14 03:17:56.650129] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.410 [2024-07-14 03:17:56.650154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.410 [2024-07-14 03:17:56.650168] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.410 [2024-07-14 03:17:56.650181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.410 [2024-07-14 03:17:56.650209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.410 qpair failed and we were unable to recover it.
00:30:01.410 [2024-07-14 03:17:56.660003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.410 [2024-07-14 03:17:56.660179] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.410 [2024-07-14 03:17:56.660212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.410 [2024-07-14 03:17:56.660230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.410 [2024-07-14 03:17:56.660244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.410 [2024-07-14 03:17:56.660275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.410 qpair failed and we were unable to recover it.
00:30:01.669 [2024-07-14 03:17:56.670060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.669 [2024-07-14 03:17:56.670252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.669 [2024-07-14 03:17:56.670280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.669 [2024-07-14 03:17:56.670294] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.669 [2024-07-14 03:17:56.670308] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.669 [2024-07-14 03:17:56.670337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.669 qpair failed and we were unable to recover it.
00:30:01.669 [2024-07-14 03:17:56.680035] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.669 [2024-07-14 03:17:56.680183] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.669 [2024-07-14 03:17:56.680210] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.669 [2024-07-14 03:17:56.680224] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.669 [2024-07-14 03:17:56.680238] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.669 [2024-07-14 03:17:56.680266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.669 qpair failed and we were unable to recover it.
00:30:01.669 [2024-07-14 03:17:56.690080] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.669 [2024-07-14 03:17:56.690228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.669 [2024-07-14 03:17:56.690265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.669 [2024-07-14 03:17:56.690278] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.669 [2024-07-14 03:17:56.690297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.669 [2024-07-14 03:17:56.690327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.669 qpair failed and we were unable to recover it.
00:30:01.669 [2024-07-14 03:17:56.700116] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.669 [2024-07-14 03:17:56.700305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.669 [2024-07-14 03:17:56.700330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.669 [2024-07-14 03:17:56.700345] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.669 [2024-07-14 03:17:56.700358] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.669 [2024-07-14 03:17:56.700385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.669 qpair failed and we were unable to recover it.
00:30:01.669 [2024-07-14 03:17:56.710138] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:01.669 [2024-07-14 03:17:56.710312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:01.669 [2024-07-14 03:17:56.710338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:01.669 [2024-07-14 03:17:56.710352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:01.669 [2024-07-14 03:17:56.710365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350
00:30:01.669 [2024-07-14 03:17:56.710393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:01.669 qpair failed and we were unable to recover it.
00:30:01.669 [2024-07-14 03:17:56.720165] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.669 [2024-07-14 03:17:56.720353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.669 [2024-07-14 03:17:56.720388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.669 [2024-07-14 03:17:56.720402] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.669 [2024-07-14 03:17:56.720416] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.669 [2024-07-14 03:17:56.720443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.669 qpair failed and we were unable to recover it. 
00:30:01.669 [2024-07-14 03:17:56.730216] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.669 [2024-07-14 03:17:56.730367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.669 [2024-07-14 03:17:56.730392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.669 [2024-07-14 03:17:56.730407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.669 [2024-07-14 03:17:56.730430] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.669 [2024-07-14 03:17:56.730460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.669 qpair failed and we were unable to recover it. 
00:30:01.669 [2024-07-14 03:17:56.740246] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.669 [2024-07-14 03:17:56.740407] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.669 [2024-07-14 03:17:56.740433] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.669 [2024-07-14 03:17:56.740447] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.669 [2024-07-14 03:17:56.740461] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.669 [2024-07-14 03:17:56.740489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.669 qpair failed and we were unable to recover it. 
00:30:01.669 [2024-07-14 03:17:56.750314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.669 [2024-07-14 03:17:56.750465] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.669 [2024-07-14 03:17:56.750490] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.669 [2024-07-14 03:17:56.750505] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.669 [2024-07-14 03:17:56.750518] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.669 [2024-07-14 03:17:56.750547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.669 qpair failed and we were unable to recover it. 
00:30:01.669 [2024-07-14 03:17:56.760426] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.669 [2024-07-14 03:17:56.760646] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.669 [2024-07-14 03:17:56.760676] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.669 [2024-07-14 03:17:56.760702] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.669 [2024-07-14 03:17:56.760715] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.669 [2024-07-14 03:17:56.760746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.669 qpair failed and we were unable to recover it. 
00:30:01.669 [2024-07-14 03:17:56.770347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.670 [2024-07-14 03:17:56.770503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.670 [2024-07-14 03:17:56.770528] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.670 [2024-07-14 03:17:56.770543] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.670 [2024-07-14 03:17:56.770556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.670 [2024-07-14 03:17:56.770588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.670 qpair failed and we were unable to recover it. 
00:30:01.670 [2024-07-14 03:17:56.780500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.670 [2024-07-14 03:17:56.780656] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.670 [2024-07-14 03:17:56.780682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.670 [2024-07-14 03:17:56.780696] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.670 [2024-07-14 03:17:56.780714] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.670 [2024-07-14 03:17:56.780743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.670 qpair failed and we were unable to recover it. 
00:30:01.670 [2024-07-14 03:17:56.790356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.670 [2024-07-14 03:17:56.790512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.670 [2024-07-14 03:17:56.790537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.670 [2024-07-14 03:17:56.790552] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.670 [2024-07-14 03:17:56.790565] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.670 [2024-07-14 03:17:56.790593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.670 qpair failed and we were unable to recover it. 
00:30:01.670 [2024-07-14 03:17:56.800386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.670 [2024-07-14 03:17:56.800537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.670 [2024-07-14 03:17:56.800562] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.670 [2024-07-14 03:17:56.800576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.670 [2024-07-14 03:17:56.800589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.670 [2024-07-14 03:17:56.800616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.670 qpair failed and we were unable to recover it. 
00:30:01.670 [2024-07-14 03:17:56.810410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.670 [2024-07-14 03:17:56.810560] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.670 [2024-07-14 03:17:56.810585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.670 [2024-07-14 03:17:56.810599] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.670 [2024-07-14 03:17:56.810612] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.670 [2024-07-14 03:17:56.810640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.670 qpair failed and we were unable to recover it. 
00:30:01.670 [2024-07-14 03:17:56.820504] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.670 [2024-07-14 03:17:56.820657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.670 [2024-07-14 03:17:56.820682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.670 [2024-07-14 03:17:56.820697] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.670 [2024-07-14 03:17:56.820710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.670 [2024-07-14 03:17:56.820738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.670 qpair failed and we were unable to recover it. 
00:30:01.670 [2024-07-14 03:17:56.830487] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.670 [2024-07-14 03:17:56.830658] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.670 [2024-07-14 03:17:56.830683] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.670 [2024-07-14 03:17:56.830698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.670 [2024-07-14 03:17:56.830711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.670 [2024-07-14 03:17:56.830739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.670 qpair failed and we were unable to recover it. 
00:30:01.670 [2024-07-14 03:17:56.840481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.670 [2024-07-14 03:17:56.840633] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.670 [2024-07-14 03:17:56.840658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.670 [2024-07-14 03:17:56.840673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.670 [2024-07-14 03:17:56.840686] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.670 [2024-07-14 03:17:56.840714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.670 qpair failed and we were unable to recover it. 
00:30:01.670 [2024-07-14 03:17:56.850519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.670 [2024-07-14 03:17:56.850708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.670 [2024-07-14 03:17:56.850734] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.670 [2024-07-14 03:17:56.850748] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.670 [2024-07-14 03:17:56.850761] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.670 [2024-07-14 03:17:56.850789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.670 qpair failed and we were unable to recover it. 
00:30:01.670 [2024-07-14 03:17:56.860566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.670 [2024-07-14 03:17:56.860719] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.670 [2024-07-14 03:17:56.860745] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.670 [2024-07-14 03:17:56.860760] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.670 [2024-07-14 03:17:56.860773] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.670 [2024-07-14 03:17:56.860801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.670 qpair failed and we were unable to recover it. 
00:30:01.670 [2024-07-14 03:17:56.870579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.670 [2024-07-14 03:17:56.870734] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.670 [2024-07-14 03:17:56.870759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.670 [2024-07-14 03:17:56.870774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.670 [2024-07-14 03:17:56.870792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.670 [2024-07-14 03:17:56.870821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.670 qpair failed and we were unable to recover it. 
00:30:01.670 [2024-07-14 03:17:56.880725] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.670 [2024-07-14 03:17:56.880926] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.670 [2024-07-14 03:17:56.880952] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.670 [2024-07-14 03:17:56.880968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.670 [2024-07-14 03:17:56.880981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.670 [2024-07-14 03:17:56.881009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.670 qpair failed and we were unable to recover it. 
00:30:01.670 [2024-07-14 03:17:56.890717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.670 [2024-07-14 03:17:56.890864] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.670 [2024-07-14 03:17:56.890896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.670 [2024-07-14 03:17:56.890911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.670 [2024-07-14 03:17:56.890923] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.670 [2024-07-14 03:17:56.890954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.670 qpair failed and we were unable to recover it. 
00:30:01.670 [2024-07-14 03:17:56.900707] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.670 [2024-07-14 03:17:56.900903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.670 [2024-07-14 03:17:56.900928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.670 [2024-07-14 03:17:56.900942] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.670 [2024-07-14 03:17:56.900955] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.670 [2024-07-14 03:17:56.900982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.670 qpair failed and we were unable to recover it. 
00:30:01.670 [2024-07-14 03:17:56.910708] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.670 [2024-07-14 03:17:56.910892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.670 [2024-07-14 03:17:56.910918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.670 [2024-07-14 03:17:56.910932] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.671 [2024-07-14 03:17:56.910945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.671 [2024-07-14 03:17:56.910973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.671 qpair failed and we were unable to recover it. 
00:30:01.671 [2024-07-14 03:17:56.920716] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.671 [2024-07-14 03:17:56.920887] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.671 [2024-07-14 03:17:56.920920] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.671 [2024-07-14 03:17:56.920936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.671 [2024-07-14 03:17:56.920950] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.671 [2024-07-14 03:17:56.920986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.671 qpair failed and we were unable to recover it. 
00:30:01.930 [2024-07-14 03:17:56.930769] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.930 [2024-07-14 03:17:56.930926] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.930 [2024-07-14 03:17:56.930954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.930 [2024-07-14 03:17:56.930968] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.930 [2024-07-14 03:17:56.930981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.930 [2024-07-14 03:17:56.931010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.930 qpair failed and we were unable to recover it. 
00:30:01.930 [2024-07-14 03:17:56.940819] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.930 [2024-07-14 03:17:56.940983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.930 [2024-07-14 03:17:56.941009] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.930 [2024-07-14 03:17:56.941024] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.930 [2024-07-14 03:17:56.941037] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.930 [2024-07-14 03:17:56.941066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.930 qpair failed and we were unable to recover it. 
00:30:01.930 [2024-07-14 03:17:56.950917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.930 [2024-07-14 03:17:56.951074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.930 [2024-07-14 03:17:56.951100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.930 [2024-07-14 03:17:56.951114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.930 [2024-07-14 03:17:56.951128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.930 [2024-07-14 03:17:56.951156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.930 qpair failed and we were unable to recover it. 
00:30:01.930 [2024-07-14 03:17:56.960876] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.930 [2024-07-14 03:17:56.961033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.930 [2024-07-14 03:17:56.961058] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.930 [2024-07-14 03:17:56.961079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.930 [2024-07-14 03:17:56.961093] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.930 [2024-07-14 03:17:56.961122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.930 qpair failed and we were unable to recover it. 
00:30:01.930 [2024-07-14 03:17:56.970860] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.930 [2024-07-14 03:17:56.971029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.930 [2024-07-14 03:17:56.971054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.930 [2024-07-14 03:17:56.971069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.930 [2024-07-14 03:17:56.971081] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.930 [2024-07-14 03:17:56.971112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.930 qpair failed and we were unable to recover it. 
00:30:01.930 [2024-07-14 03:17:56.980964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.930 [2024-07-14 03:17:56.981143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.930 [2024-07-14 03:17:56.981168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.930 [2024-07-14 03:17:56.981182] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.930 [2024-07-14 03:17:56.981195] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.930 [2024-07-14 03:17:56.981223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.930 qpair failed and we were unable to recover it. 
00:30:01.930 [2024-07-14 03:17:56.990929] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.930 [2024-07-14 03:17:56.991082] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.930 [2024-07-14 03:17:56.991107] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.930 [2024-07-14 03:17:56.991122] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.930 [2024-07-14 03:17:56.991135] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.930 [2024-07-14 03:17:56.991163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.930 qpair failed and we were unable to recover it. 
00:30:01.930 [2024-07-14 03:17:57.000938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.930 [2024-07-14 03:17:57.001087] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.930 [2024-07-14 03:17:57.001111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.930 [2024-07-14 03:17:57.001125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.930 [2024-07-14 03:17:57.001137] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.930 [2024-07-14 03:17:57.001164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.930 qpair failed and we were unable to recover it. 
00:30:01.930 [2024-07-14 03:17:57.011052] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.930 [2024-07-14 03:17:57.011205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.930 [2024-07-14 03:17:57.011230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.930 [2024-07-14 03:17:57.011244] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.930 [2024-07-14 03:17:57.011258] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.930 [2024-07-14 03:17:57.011286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.930 qpair failed and we were unable to recover it. 
00:30:01.930 [2024-07-14 03:17:57.021026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.930 [2024-07-14 03:17:57.021189] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.930 [2024-07-14 03:17:57.021214] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.930 [2024-07-14 03:17:57.021229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.930 [2024-07-14 03:17:57.021242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.930 [2024-07-14 03:17:57.021269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.930 qpair failed and we were unable to recover it. 
00:30:01.930 [2024-07-14 03:17:57.031061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.930 [2024-07-14 03:17:57.031220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.930 [2024-07-14 03:17:57.031244] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.930 [2024-07-14 03:17:57.031258] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.930 [2024-07-14 03:17:57.031272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.930 [2024-07-14 03:17:57.031300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.930 qpair failed and we were unable to recover it. 
00:30:01.930 [2024-07-14 03:17:57.041077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.930 [2024-07-14 03:17:57.041232] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.930 [2024-07-14 03:17:57.041256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.930 [2024-07-14 03:17:57.041270] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.930 [2024-07-14 03:17:57.041284] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.930 [2024-07-14 03:17:57.041311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.930 qpair failed and we were unable to recover it. 
00:30:01.930 [2024-07-14 03:17:57.051125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.931 [2024-07-14 03:17:57.051277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.931 [2024-07-14 03:17:57.051302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.931 [2024-07-14 03:17:57.051323] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.931 [2024-07-14 03:17:57.051337] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.931 [2024-07-14 03:17:57.051367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.931 qpair failed and we were unable to recover it. 
00:30:01.931 [2024-07-14 03:17:57.061130] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.931 [2024-07-14 03:17:57.061284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.931 [2024-07-14 03:17:57.061310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.931 [2024-07-14 03:17:57.061324] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.931 [2024-07-14 03:17:57.061337] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.931 [2024-07-14 03:17:57.061365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.931 qpair failed and we were unable to recover it. 
00:30:01.931 [2024-07-14 03:17:57.071250] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.931 [2024-07-14 03:17:57.071402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.931 [2024-07-14 03:17:57.071427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.931 [2024-07-14 03:17:57.071441] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.931 [2024-07-14 03:17:57.071454] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.931 [2024-07-14 03:17:57.071481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.931 qpair failed and we were unable to recover it. 
00:30:01.931 [2024-07-14 03:17:57.081177] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.931 [2024-07-14 03:17:57.081324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.931 [2024-07-14 03:17:57.081349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.931 [2024-07-14 03:17:57.081363] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.931 [2024-07-14 03:17:57.081376] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.931 [2024-07-14 03:17:57.081404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.931 qpair failed and we were unable to recover it. 
00:30:01.931 [2024-07-14 03:17:57.091267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.931 [2024-07-14 03:17:57.091449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.931 [2024-07-14 03:17:57.091475] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.931 [2024-07-14 03:17:57.091494] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.931 [2024-07-14 03:17:57.091508] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.931 [2024-07-14 03:17:57.091539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.931 qpair failed and we were unable to recover it. 
00:30:01.931 [2024-07-14 03:17:57.101272] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.931 [2024-07-14 03:17:57.101437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.931 [2024-07-14 03:17:57.101462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.931 [2024-07-14 03:17:57.101477] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.931 [2024-07-14 03:17:57.101490] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.931 [2024-07-14 03:17:57.101520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.931 qpair failed and we were unable to recover it. 
00:30:01.931 [2024-07-14 03:17:57.111295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.931 [2024-07-14 03:17:57.111450] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.931 [2024-07-14 03:17:57.111476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.931 [2024-07-14 03:17:57.111490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.931 [2024-07-14 03:17:57.111503] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.931 [2024-07-14 03:17:57.111531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.931 qpair failed and we were unable to recover it. 
00:30:01.931 [2024-07-14 03:17:57.121385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.931 [2024-07-14 03:17:57.121531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.931 [2024-07-14 03:17:57.121556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.931 [2024-07-14 03:17:57.121571] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.931 [2024-07-14 03:17:57.121584] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.931 [2024-07-14 03:17:57.121612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.931 qpair failed and we were unable to recover it. 
00:30:01.931 [2024-07-14 03:17:57.131364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.931 [2024-07-14 03:17:57.131510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.931 [2024-07-14 03:17:57.131535] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.931 [2024-07-14 03:17:57.131550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.931 [2024-07-14 03:17:57.131563] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.931 [2024-07-14 03:17:57.131591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.931 qpair failed and we were unable to recover it. 
00:30:01.931 [2024-07-14 03:17:57.141466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.931 [2024-07-14 03:17:57.141623] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.931 [2024-07-14 03:17:57.141649] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.931 [2024-07-14 03:17:57.141671] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.931 [2024-07-14 03:17:57.141685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.931 [2024-07-14 03:17:57.141714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.931 qpair failed and we were unable to recover it. 
00:30:01.931 [2024-07-14 03:17:57.151450] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.931 [2024-07-14 03:17:57.151609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.931 [2024-07-14 03:17:57.151635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.931 [2024-07-14 03:17:57.151649] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.931 [2024-07-14 03:17:57.151662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.931 [2024-07-14 03:17:57.151690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.931 qpair failed and we were unable to recover it. 
00:30:01.931 [2024-07-14 03:17:57.161519] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.931 [2024-07-14 03:17:57.161669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.931 [2024-07-14 03:17:57.161695] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.931 [2024-07-14 03:17:57.161709] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.931 [2024-07-14 03:17:57.161722] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.931 [2024-07-14 03:17:57.161751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.931 qpair failed and we were unable to recover it. 
00:30:01.931 [2024-07-14 03:17:57.171467] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.931 [2024-07-14 03:17:57.171619] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.931 [2024-07-14 03:17:57.171644] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.931 [2024-07-14 03:17:57.171658] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.931 [2024-07-14 03:17:57.171672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.931 [2024-07-14 03:17:57.171700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.931 qpair failed and we were unable to recover it. 
00:30:01.931 [2024-07-14 03:17:57.181552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:01.931 [2024-07-14 03:17:57.181758] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:01.931 [2024-07-14 03:17:57.181785] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:01.932 [2024-07-14 03:17:57.181801] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:01.932 [2024-07-14 03:17:57.181825] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:01.932 [2024-07-14 03:17:57.181881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:01.932 qpair failed and we were unable to recover it. 
00:30:02.190 [2024-07-14 03:17:57.191528] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.190 [2024-07-14 03:17:57.191706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.190 [2024-07-14 03:17:57.191733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.190 [2024-07-14 03:17:57.191747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.190 [2024-07-14 03:17:57.191760] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:02.190 [2024-07-14 03:17:57.191790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.190 qpair failed and we were unable to recover it. 
00:30:02.190 [2024-07-14 03:17:57.201520] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.190 [2024-07-14 03:17:57.201666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.190 [2024-07-14 03:17:57.201692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.190 [2024-07-14 03:17:57.201706] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.190 [2024-07-14 03:17:57.201720] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:02.190 [2024-07-14 03:17:57.201749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.190 qpair failed and we were unable to recover it. 
00:30:02.190 [2024-07-14 03:17:57.211576] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.190 [2024-07-14 03:17:57.211733] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.190 [2024-07-14 03:17:57.211759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.190 [2024-07-14 03:17:57.211773] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.190 [2024-07-14 03:17:57.211786] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:02.190 [2024-07-14 03:17:57.211814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.190 qpair failed and we were unable to recover it. 
00:30:02.190 [2024-07-14 03:17:57.221630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.190 [2024-07-14 03:17:57.221798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.190 [2024-07-14 03:17:57.221824] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.190 [2024-07-14 03:17:57.221839] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.190 [2024-07-14 03:17:57.221852] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:02.190 [2024-07-14 03:17:57.221887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.190 qpair failed and we were unable to recover it. 
00:30:02.190 [2024-07-14 03:17:57.231704] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.190 [2024-07-14 03:17:57.231852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.190 [2024-07-14 03:17:57.231891] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.190 [2024-07-14 03:17:57.231912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.190 [2024-07-14 03:17:57.231927] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:02.191 [2024-07-14 03:17:57.231956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.191 qpair failed and we were unable to recover it. 
00:30:02.191 [2024-07-14 03:17:57.241798] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.191 [2024-07-14 03:17:57.241957] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.191 [2024-07-14 03:17:57.241983] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.191 [2024-07-14 03:17:57.241997] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.191 [2024-07-14 03:17:57.242010] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:02.191 [2024-07-14 03:17:57.242038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.191 qpair failed and we were unable to recover it. 
00:30:02.191 [2024-07-14 03:17:57.251703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.191 [2024-07-14 03:17:57.251899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.191 [2024-07-14 03:17:57.251924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.191 [2024-07-14 03:17:57.251938] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.191 [2024-07-14 03:17:57.251952] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:02.191 [2024-07-14 03:17:57.251980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.191 qpair failed and we were unable to recover it. 
00:30:02.191 [2024-07-14 03:17:57.261712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.191 [2024-07-14 03:17:57.261863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.191 [2024-07-14 03:17:57.261894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.191 [2024-07-14 03:17:57.261908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.191 [2024-07-14 03:17:57.261921] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:02.191 [2024-07-14 03:17:57.261949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.191 qpair failed and we were unable to recover it. 
00:30:02.191 [2024-07-14 03:17:57.271748] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.191 [2024-07-14 03:17:57.271955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.191 [2024-07-14 03:17:57.271980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.191 [2024-07-14 03:17:57.271995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.191 [2024-07-14 03:17:57.272008] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x161a350 00:30:02.191 [2024-07-14 03:17:57.272036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:02.191 qpair failed and we were unable to recover it. 
00:30:02.191 [2024-07-14 03:17:57.281780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.191 [2024-07-14 03:17:57.281934] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.191 [2024-07-14 03:17:57.281967] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.191 [2024-07-14 03:17:57.281984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.191 [2024-07-14 03:17:57.281998] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4afc000b90 00:30:02.191 [2024-07-14 03:17:57.282032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.191 qpair failed and we were unable to recover it. 
00:30:02.191 [2024-07-14 03:17:57.291877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.191 [2024-07-14 03:17:57.292026] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.191 [2024-07-14 03:17:57.292053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.191 [2024-07-14 03:17:57.292068] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.191 [2024-07-14 03:17:57.292082] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4afc000b90 00:30:02.191 [2024-07-14 03:17:57.292112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:02.191 qpair failed and we were unable to recover it. 
00:30:02.191 [2024-07-14 03:17:57.301836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.191 [2024-07-14 03:17:57.301998] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.191 [2024-07-14 03:17:57.302031] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.191 [2024-07-14 03:17:57.302048] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.191 [2024-07-14 03:17:57.302062] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b04000b90 00:30:02.191 [2024-07-14 03:17:57.302093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.191 qpair failed and we were unable to recover it. 
00:30:02.191 [2024-07-14 03:17:57.311924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.191 [2024-07-14 03:17:57.312076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.191 [2024-07-14 03:17:57.312104] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.191 [2024-07-14 03:17:57.312120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.191 [2024-07-14 03:17:57.312133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4b04000b90 00:30:02.191 [2024-07-14 03:17:57.312165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:02.191 qpair failed and we were unable to recover it. 00:30:02.191 [2024-07-14 03:17:57.312277] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:02.191 A controller has encountered a failure and is being reset. 
00:30:02.191 [2024-07-14 03:17:57.321921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.191 [2024-07-14 03:17:57.322084] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.191 [2024-07-14 03:17:57.322117] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.191 [2024-07-14 03:17:57.322133] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.191 [2024-07-14 03:17:57.322148] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4af4000b90 00:30:02.191 [2024-07-14 03:17:57.322179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.191 qpair failed and we were unable to recover it. 
00:30:02.191 [2024-07-14 03:17:57.331940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:02.191 [2024-07-14 03:17:57.332118] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:02.191 [2024-07-14 03:17:57.332147] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:02.191 [2024-07-14 03:17:57.332165] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:02.191 [2024-07-14 03:17:57.332179] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f4af4000b90 00:30:02.191 [2024-07-14 03:17:57.332211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:02.191 qpair failed and we were unable to recover it. 00:30:02.191 [2024-07-14 03:17:57.332323] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1627dc0 (9): Bad file descriptor 00:30:02.191 Controller properly reset. 00:30:02.191 Initializing NVMe Controllers 00:30:02.191 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:02.191 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:02.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:02.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:02.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:02.191 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:02.191 Initialization complete. Launching workers. 
00:30:02.191 Starting thread on core 1 00:30:02.191 Starting thread on core 2 00:30:02.191 Starting thread on core 3 00:30:02.191 Starting thread on core 0 00:30:02.191 03:17:57 -- host/target_disconnect.sh@59 -- # sync 00:30:02.191 00:30:02.191 real 0m11.547s 00:30:02.191 user 0m19.459s 00:30:02.191 sys 0m5.564s 00:30:02.191 03:17:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:02.191 03:17:57 -- common/autotest_common.sh@10 -- # set +x 00:30:02.191 ************************************ 00:30:02.191 END TEST nvmf_target_disconnect_tc2 00:30:02.191 ************************************ 00:30:02.191 03:17:57 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:30:02.191 03:17:57 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:02.191 03:17:57 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:30:02.191 03:17:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:02.191 03:17:57 -- nvmf/common.sh@116 -- # sync 00:30:02.191 03:17:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:02.191 03:17:57 -- nvmf/common.sh@119 -- # set +e 00:30:02.191 03:17:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:02.191 03:17:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:02.191 rmmod nvme_tcp 00:30:02.191 rmmod nvme_fabrics 00:30:02.191 rmmod nvme_keyring 00:30:02.191 03:17:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:02.191 03:17:57 -- nvmf/common.sh@123 -- # set -e 00:30:02.191 03:17:57 -- nvmf/common.sh@124 -- # return 0 00:30:02.191 03:17:57 -- nvmf/common.sh@477 -- # '[' -n 2132274 ']' 00:30:02.191 03:17:57 -- nvmf/common.sh@478 -- # killprocess 2132274 00:30:02.191 03:17:57 -- common/autotest_common.sh@926 -- # '[' -z 2132274 ']' 00:30:02.191 03:17:57 -- common/autotest_common.sh@930 -- # kill -0 2132274 00:30:02.191 03:17:57 -- common/autotest_common.sh@931 -- # uname 00:30:02.450 03:17:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:02.450 03:17:57 -- common/autotest_common.sh@932 -- # 
ps --no-headers -o comm= 2132274 00:30:02.450 03:17:57 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:30:02.450 03:17:57 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:30:02.450 03:17:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2132274' 00:30:02.450 killing process with pid 2132274 00:30:02.450 03:17:57 -- common/autotest_common.sh@945 -- # kill 2132274 00:30:02.450 03:17:57 -- common/autotest_common.sh@950 -- # wait 2132274 00:30:02.709 03:17:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:02.709 03:17:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:02.709 03:17:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:02.709 03:17:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:02.709 03:17:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:02.709 03:17:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.709 03:17:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:02.709 03:17:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:04.614 03:17:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:04.614 00:30:04.614 real 0m16.225s 00:30:04.614 user 0m46.038s 00:30:04.614 sys 0m7.539s 00:30:04.614 03:17:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:04.614 03:17:59 -- common/autotest_common.sh@10 -- # set +x 00:30:04.614 ************************************ 00:30:04.614 END TEST nvmf_target_disconnect 00:30:04.614 ************************************ 00:30:04.614 03:17:59 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:30:04.614 03:17:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:04.614 03:17:59 -- common/autotest_common.sh@10 -- # set +x 00:30:04.614 03:17:59 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:30:04.614 00:30:04.614 real 22m25.746s 00:30:04.614 user 64m36.192s 00:30:04.614 sys 5m34.358s 00:30:04.614 03:17:59 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:30:04.614 03:17:59 -- common/autotest_common.sh@10 -- # set +x 00:30:04.614 ************************************ 00:30:04.614 END TEST nvmf_tcp 00:30:04.614 ************************************ 00:30:04.614 03:17:59 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:30:04.614 03:17:59 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:04.614 03:17:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:04.614 03:17:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:04.614 03:17:59 -- common/autotest_common.sh@10 -- # set +x 00:30:04.614 ************************************ 00:30:04.614 START TEST spdkcli_nvmf_tcp 00:30:04.614 ************************************ 00:30:04.614 03:17:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:04.873 * Looking for test storage... 
00:30:04.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:04.874 03:17:59 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:04.874 03:17:59 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:04.874 03:17:59 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:04.874 03:17:59 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:04.874 03:17:59 -- nvmf/common.sh@7 -- # uname -s 00:30:04.874 03:17:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:04.874 03:17:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:04.874 03:17:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:04.874 03:17:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:04.874 03:17:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:04.874 03:17:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:04.874 03:17:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:04.874 03:17:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:04.874 03:17:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:04.874 03:17:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:04.874 03:17:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:04.874 03:17:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:04.874 03:17:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:04.874 03:17:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:04.874 03:17:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:04.874 03:17:59 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:04.874 03:17:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:04.874 03:17:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:04.874 03:17:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:04.874 03:17:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.874 03:17:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.874 03:17:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.874 03:17:59 -- paths/export.sh@5 -- # export PATH 00:30:04.874 03:17:59 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:04.874 03:17:59 -- nvmf/common.sh@46 -- # : 0 00:30:04.874 03:17:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:04.874 03:17:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:04.874 03:17:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:04.874 03:17:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:04.874 03:17:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:04.874 03:17:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:04.874 03:17:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:04.874 03:17:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:04.874 03:17:59 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:04.874 03:17:59 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:04.874 03:17:59 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:04.874 03:17:59 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:04.874 03:17:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:04.874 03:17:59 -- common/autotest_common.sh@10 -- # set +x 00:30:04.874 03:17:59 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:04.874 03:17:59 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2133495 00:30:04.874 03:17:59 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:04.874 03:17:59 -- spdkcli/common.sh@34 -- # waitforlisten 2133495 00:30:04.874 03:17:59 -- common/autotest_common.sh@819 -- # '[' -z 2133495 ']' 00:30:04.874 03:17:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.874 03:17:59 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:30:04.874 03:17:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:04.874 03:17:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:04.874 03:17:59 -- common/autotest_common.sh@10 -- # set +x 00:30:04.874 [2024-07-14 03:17:59.945539] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:30:04.874 [2024-07-14 03:17:59.945612] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2133495 ] 00:30:04.874 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.874 [2024-07-14 03:18:00.001939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:04.874 [2024-07-14 03:18:00.094803] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:04.874 [2024-07-14 03:18:00.095010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.874 [2024-07-14 03:18:00.095016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.809 03:18:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:05.809 03:18:00 -- common/autotest_common.sh@852 -- # return 0 00:30:05.809 03:18:00 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:05.809 03:18:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:05.809 03:18:00 -- common/autotest_common.sh@10 -- # set +x 00:30:05.809 03:18:00 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:05.809 03:18:00 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:05.809 03:18:00 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:05.809 03:18:00 -- 
common/autotest_common.sh@712 -- # xtrace_disable 00:30:05.809 03:18:00 -- common/autotest_common.sh@10 -- # set +x 00:30:05.809 03:18:00 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:05.809 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:05.809 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:05.809 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:05.809 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:05.810 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:05.810 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:05.810 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:05.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:05.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:05.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:05.810 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:05.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:05.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:05.810 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:05.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create 
Malloc1'\'' '\''Malloc1'\'' True 00:30:05.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:05.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:05.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:05.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:05.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:05.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:05.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:05.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:05.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:05.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:05.810 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:05.810 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:05.810 ' 00:30:06.068 [2024-07-14 03:18:01.304515] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:08.596 [2024-07-14 03:18:03.478637] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.527 [2024-07-14 03:18:04.719039] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
127.0.0.1 port 4260 *** 00:30:12.119 [2024-07-14 03:18:07.010344] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:14.018 [2024-07-14 03:18:08.968886] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:15.390 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:15.390 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:15.390 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:15.390 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:15.390 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:15.390 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:15.390 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:15.390 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:15.390 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:15.390 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:15.390 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:15.390 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:15.390 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:15.390 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:15.390 Executing 
command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:15.390 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:15.390 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:15.390 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:15.390 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:15.390 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:15.390 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:15.390 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:15.390 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:15.390 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:15.390 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:15.390 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:15.390 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:15.390 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:15.390 03:18:10 -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:30:15.390 03:18:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:15.390 03:18:10 -- common/autotest_common.sh@10 -- # set +x 00:30:15.390 03:18:10 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:15.390 03:18:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:15.390 03:18:10 -- common/autotest_common.sh@10 -- # set +x 00:30:15.390 03:18:10 -- spdkcli/nvmf.sh@69 -- # check_match 00:30:15.390 03:18:10 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:15.956 03:18:11 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:15.956 03:18:11 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:15.956 03:18:11 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:15.956 03:18:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:15.956 03:18:11 -- common/autotest_common.sh@10 -- # set +x 00:30:15.956 03:18:11 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:15.956 03:18:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:15.956 03:18:11 -- common/autotest_common.sh@10 -- # set +x 00:30:15.956 03:18:11 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:15.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:15.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:15.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 
00:30:15.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:15.956 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:15.956 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:15.956 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:15.956 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:15.956 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:15.956 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:15.956 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:15.956 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:15.956 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:15.956 ' 00:30:21.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:21.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:21.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:21.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:21.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:21.214 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:21.214 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:21.214 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:21.214 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:21.214 Executing 
command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:21.214 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:21.214 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:21.214 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:21.214 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:21.214 03:18:16 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:21.214 03:18:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:21.214 03:18:16 -- common/autotest_common.sh@10 -- # set +x 00:30:21.214 03:18:16 -- spdkcli/nvmf.sh@90 -- # killprocess 2133495 00:30:21.214 03:18:16 -- common/autotest_common.sh@926 -- # '[' -z 2133495 ']' 00:30:21.214 03:18:16 -- common/autotest_common.sh@930 -- # kill -0 2133495 00:30:21.214 03:18:16 -- common/autotest_common.sh@931 -- # uname 00:30:21.214 03:18:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:21.214 03:18:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2133495 00:30:21.214 03:18:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:21.214 03:18:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:21.214 03:18:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2133495' 00:30:21.214 killing process with pid 2133495 00:30:21.214 03:18:16 -- common/autotest_common.sh@945 -- # kill 2133495 00:30:21.214 [2024-07-14 03:18:16.357010] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:21.214 03:18:16 -- common/autotest_common.sh@950 -- # wait 2133495 00:30:21.473 03:18:16 -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:21.473 03:18:16 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:21.473 03:18:16 -- spdkcli/common.sh@13 -- # '[' -n 2133495 ']' 00:30:21.473 03:18:16 -- 
spdkcli/common.sh@14 -- # killprocess 2133495 00:30:21.473 03:18:16 -- common/autotest_common.sh@926 -- # '[' -z 2133495 ']' 00:30:21.473 03:18:16 -- common/autotest_common.sh@930 -- # kill -0 2133495 00:30:21.473 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2133495) - No such process 00:30:21.473 03:18:16 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2133495 is not found' 00:30:21.473 Process with pid 2133495 is not found 00:30:21.473 03:18:16 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:21.473 03:18:16 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:21.473 03:18:16 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:21.473 00:30:21.473 real 0m16.758s 00:30:21.473 user 0m35.570s 00:30:21.473 sys 0m0.807s 00:30:21.473 03:18:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:21.473 03:18:16 -- common/autotest_common.sh@10 -- # set +x 00:30:21.473 ************************************ 00:30:21.473 END TEST spdkcli_nvmf_tcp 00:30:21.473 ************************************ 00:30:21.473 03:18:16 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:21.473 03:18:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:21.473 03:18:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:21.473 03:18:16 -- common/autotest_common.sh@10 -- # set +x 00:30:21.473 ************************************ 00:30:21.473 START TEST nvmf_identify_passthru 00:30:21.473 ************************************ 00:30:21.473 03:18:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:21.473 * Looking 
for test storage... 00:30:21.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:21.473 03:18:16 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:21.473 03:18:16 -- nvmf/common.sh@7 -- # uname -s 00:30:21.473 03:18:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:21.473 03:18:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:21.473 03:18:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:21.473 03:18:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:21.473 03:18:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:21.473 03:18:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:21.473 03:18:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:21.473 03:18:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:21.473 03:18:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:21.473 03:18:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:21.473 03:18:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:21.473 03:18:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:21.473 03:18:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:21.473 03:18:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:21.473 03:18:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:21.473 03:18:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:21.474 03:18:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.474 03:18:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.474 03:18:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.474 03:18:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.474 03:18:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.474 03:18:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.474 03:18:16 -- paths/export.sh@5 -- # export PATH 00:30:21.474 03:18:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.474 03:18:16 -- nvmf/common.sh@46 -- # : 0 00:30:21.474 03:18:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:21.474 03:18:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:21.474 
03:18:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:21.474 03:18:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:21.474 03:18:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:21.474 03:18:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:21.474 03:18:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:21.474 03:18:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:21.474 03:18:16 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:21.474 03:18:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:21.474 03:18:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:21.474 03:18:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:21.474 03:18:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.474 03:18:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.474 03:18:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.474 03:18:16 -- paths/export.sh@5 -- # export PATH 00:30:21.474 03:18:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:21.474 03:18:16 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:21.474 03:18:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:21.474 03:18:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:21.474 03:18:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:21.474 03:18:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:21.474 03:18:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:21.474 03:18:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:21.474 03:18:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:21.474 03:18:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:21.474 03:18:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:21.474 03:18:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:21.474 03:18:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:21.474 03:18:16 -- 
common/autotest_common.sh@10 -- # set +x 00:30:23.377 03:18:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:23.377 03:18:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:23.377 03:18:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:23.377 03:18:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:23.377 03:18:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:23.377 03:18:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:23.377 03:18:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:23.377 03:18:18 -- nvmf/common.sh@294 -- # net_devs=() 00:30:23.377 03:18:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:23.377 03:18:18 -- nvmf/common.sh@295 -- # e810=() 00:30:23.377 03:18:18 -- nvmf/common.sh@295 -- # local -ga e810 00:30:23.377 03:18:18 -- nvmf/common.sh@296 -- # x722=() 00:30:23.377 03:18:18 -- nvmf/common.sh@296 -- # local -ga x722 00:30:23.377 03:18:18 -- nvmf/common.sh@297 -- # mlx=() 00:30:23.377 03:18:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:23.377 03:18:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.377 03:18:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.377 03:18:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.377 03:18:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.377 03:18:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.377 03:18:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.377 03:18:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.377 03:18:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.377 03:18:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.377 03:18:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.377 03:18:18 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.377 03:18:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:23.377 03:18:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:23.377 03:18:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:23.377 03:18:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:23.377 03:18:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:23.377 03:18:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:23.377 03:18:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:23.377 03:18:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:23.377 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:23.377 03:18:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:23.377 03:18:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:23.377 03:18:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.377 03:18:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.377 03:18:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:23.377 03:18:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:23.377 03:18:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:23.377 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:23.377 03:18:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:23.377 03:18:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:23.377 03:18:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.377 03:18:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.377 03:18:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:23.377 03:18:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:23.377 03:18:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:23.377 03:18:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:23.377 03:18:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:23.377 03:18:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:30:23.377 03:18:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:23.377 03:18:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.377 03:18:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:23.377 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:23.377 03:18:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.377 03:18:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:23.377 03:18:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.377 03:18:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:23.377 03:18:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.377 03:18:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:23.377 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:23.377 03:18:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.377 03:18:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:23.377 03:18:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:23.377 03:18:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:23.377 03:18:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:23.377 03:18:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:23.377 03:18:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.378 03:18:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.378 03:18:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:23.378 03:18:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:23.378 03:18:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:23.378 03:18:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:23.378 03:18:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:23.378 03:18:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:23.378 03:18:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:30:23.378 03:18:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:23.378 03:18:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:23.378 03:18:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:23.378 03:18:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:23.636 03:18:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:23.636 03:18:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:23.636 03:18:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:23.636 03:18:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:23.636 03:18:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:23.636 03:18:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:23.636 03:18:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:23.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:23.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:30:23.636 00:30:23.636 --- 10.0.0.2 ping statistics --- 00:30:23.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.636 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:30:23.636 03:18:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:23.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:23.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:30:23.636 00:30:23.636 --- 10.0.0.1 ping statistics --- 00:30:23.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.636 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:30:23.636 03:18:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.636 03:18:18 -- nvmf/common.sh@410 -- # return 0 00:30:23.636 03:18:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:23.636 03:18:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:23.636 03:18:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:23.636 03:18:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:23.636 03:18:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:23.636 03:18:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:23.636 03:18:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:23.636 03:18:18 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:23.636 03:18:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:23.636 03:18:18 -- common/autotest_common.sh@10 -- # set +x 00:30:23.636 03:18:18 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:23.636 03:18:18 -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:23.636 03:18:18 -- common/autotest_common.sh@1509 -- # local bdfs 00:30:23.636 03:18:18 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:30:23.637 03:18:18 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:30:23.637 03:18:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:23.637 03:18:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:23.637 03:18:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:23.637 03:18:18 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:23.637 03:18:18 -- 
common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:23.637 03:18:18 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:23.637 03:18:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:30:23.637 03:18:18 -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:30:23.637 03:18:18 -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:30:23.637 03:18:18 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:30:23.637 03:18:18 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:23.637 03:18:18 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:23.637 03:18:18 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:23.637 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.824 03:18:22 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:30:27.824 03:18:22 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:27.824 03:18:22 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:27.824 03:18:22 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:27.824 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.005 03:18:27 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:32.005 03:18:27 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:32.005 03:18:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:32.005 03:18:27 -- common/autotest_common.sh@10 -- # set +x 00:30:32.005 03:18:27 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:32.005 03:18:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:32.005 03:18:27 -- common/autotest_common.sh@10 -- # set +x 00:30:32.005 03:18:27 -- target/identify_passthru.sh@31 -- # 
nvmfpid=2138840 00:30:32.005 03:18:27 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:32.005 03:18:27 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:32.005 03:18:27 -- target/identify_passthru.sh@35 -- # waitforlisten 2138840 00:30:32.005 03:18:27 -- common/autotest_common.sh@819 -- # '[' -z 2138840 ']' 00:30:32.005 03:18:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.005 03:18:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:32.005 03:18:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.005 03:18:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:32.005 03:18:27 -- common/autotest_common.sh@10 -- # set +x 00:30:32.005 [2024-07-14 03:18:27.244446] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:30:32.005 [2024-07-14 03:18:27.244517] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:32.263 EAL: No free 2048 kB hugepages reported on node 1 00:30:32.263 [2024-07-14 03:18:27.308716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:32.263 [2024-07-14 03:18:27.391487] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:32.263 [2024-07-14 03:18:27.391634] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:32.263 [2024-07-14 03:18:27.391651] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:32.263 [2024-07-14 03:18:27.391662] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:32.263 [2024-07-14 03:18:27.391713] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.263 [2024-07-14 03:18:27.391770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:32.263 [2024-07-14 03:18:27.391836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:32.263 [2024-07-14 03:18:27.391838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:32.263 03:18:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:32.263 03:18:27 -- common/autotest_common.sh@852 -- # return 0 00:30:32.263 03:18:27 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:32.263 03:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:32.263 03:18:27 -- common/autotest_common.sh@10 -- # set +x 00:30:32.264 INFO: Log level set to 20 00:30:32.264 INFO: Requests: 00:30:32.264 { 00:30:32.264 "jsonrpc": "2.0", 00:30:32.264 "method": "nvmf_set_config", 00:30:32.264 "id": 1, 00:30:32.264 "params": { 00:30:32.264 "admin_cmd_passthru": { 00:30:32.264 "identify_ctrlr": true 00:30:32.264 } 00:30:32.264 } 00:30:32.264 } 00:30:32.264 00:30:32.264 INFO: response: 00:30:32.264 { 00:30:32.264 "jsonrpc": "2.0", 00:30:32.264 "id": 1, 00:30:32.264 "result": true 00:30:32.264 } 00:30:32.264 00:30:32.264 03:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:32.264 03:18:27 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:32.264 03:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:32.264 03:18:27 -- common/autotest_common.sh@10 -- # set +x 00:30:32.264 INFO: Setting log level to 20 00:30:32.264 INFO: Setting log level to 20 
00:30:32.264 INFO: Log level set to 20 00:30:32.264 INFO: Log level set to 20 00:30:32.264 INFO: Requests: 00:30:32.264 { 00:30:32.264 "jsonrpc": "2.0", 00:30:32.264 "method": "framework_start_init", 00:30:32.264 "id": 1 00:30:32.264 } 00:30:32.264 00:30:32.264 INFO: Requests: 00:30:32.264 { 00:30:32.264 "jsonrpc": "2.0", 00:30:32.264 "method": "framework_start_init", 00:30:32.264 "id": 1 00:30:32.264 } 00:30:32.264 00:30:32.522 [2024-07-14 03:18:27.565249] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:32.522 INFO: response: 00:30:32.522 { 00:30:32.522 "jsonrpc": "2.0", 00:30:32.522 "id": 1, 00:30:32.522 "result": true 00:30:32.522 } 00:30:32.522 00:30:32.522 INFO: response: 00:30:32.522 { 00:30:32.522 "jsonrpc": "2.0", 00:30:32.522 "id": 1, 00:30:32.522 "result": true 00:30:32.522 } 00:30:32.522 00:30:32.522 03:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:32.522 03:18:27 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:32.522 03:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:32.522 03:18:27 -- common/autotest_common.sh@10 -- # set +x 00:30:32.522 INFO: Setting log level to 40 00:30:32.522 INFO: Setting log level to 40 00:30:32.522 INFO: Setting log level to 40 00:30:32.522 [2024-07-14 03:18:27.575301] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:32.522 03:18:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:32.522 03:18:27 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:32.522 03:18:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:32.522 03:18:27 -- common/autotest_common.sh@10 -- # set +x 00:30:32.523 03:18:27 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:30:32.523 03:18:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:32.523 03:18:27 -- common/autotest_common.sh@10 -- # set +x 
00:30:35.801 Nvme0n1 00:30:35.801 03:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:35.801 03:18:30 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:35.801 03:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:35.801 03:18:30 -- common/autotest_common.sh@10 -- # set +x 00:30:35.801 03:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:35.801 03:18:30 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:35.801 03:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:35.801 03:18:30 -- common/autotest_common.sh@10 -- # set +x 00:30:35.801 03:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:35.801 03:18:30 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:35.801 03:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:35.801 03:18:30 -- common/autotest_common.sh@10 -- # set +x 00:30:35.801 [2024-07-14 03:18:30.464122] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:35.801 03:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:35.801 03:18:30 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:35.801 03:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:35.801 03:18:30 -- common/autotest_common.sh@10 -- # set +x 00:30:35.801 [2024-07-14 03:18:30.471790] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:35.801 [ 00:30:35.801 { 00:30:35.801 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:35.801 "subtype": "Discovery", 00:30:35.801 "listen_addresses": [], 00:30:35.801 "allow_any_host": true, 00:30:35.801 "hosts": [] 00:30:35.801 }, 00:30:35.801 { 
00:30:35.801 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:35.801 "subtype": "NVMe", 00:30:35.801 "listen_addresses": [ 00:30:35.801 { 00:30:35.801 "transport": "TCP", 00:30:35.801 "trtype": "TCP", 00:30:35.801 "adrfam": "IPv4", 00:30:35.801 "traddr": "10.0.0.2", 00:30:35.801 "trsvcid": "4420" 00:30:35.801 } 00:30:35.801 ], 00:30:35.801 "allow_any_host": true, 00:30:35.801 "hosts": [], 00:30:35.801 "serial_number": "SPDK00000000000001", 00:30:35.801 "model_number": "SPDK bdev Controller", 00:30:35.801 "max_namespaces": 1, 00:30:35.801 "min_cntlid": 1, 00:30:35.801 "max_cntlid": 65519, 00:30:35.801 "namespaces": [ 00:30:35.801 { 00:30:35.801 "nsid": 1, 00:30:35.801 "bdev_name": "Nvme0n1", 00:30:35.801 "name": "Nvme0n1", 00:30:35.801 "nguid": "FCFDF49B5D3E4FA489AC24675B939BDD", 00:30:35.801 "uuid": "fcfdf49b-5d3e-4fa4-89ac-24675b939bdd" 00:30:35.801 } 00:30:35.801 ] 00:30:35.801 } 00:30:35.801 ] 00:30:35.801 03:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:35.801 03:18:30 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:35.801 03:18:30 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:35.801 03:18:30 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:35.801 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.801 03:18:30 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:30:35.801 03:18:30 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:35.801 03:18:30 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:35.801 03:18:30 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:35.801 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.801 
03:18:30 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:35.801 03:18:30 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:30:35.801 03:18:30 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:35.801 03:18:30 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:35.801 03:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:35.801 03:18:30 -- common/autotest_common.sh@10 -- # set +x 00:30:35.801 03:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:35.801 03:18:30 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:35.801 03:18:30 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:35.801 03:18:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:35.801 03:18:30 -- nvmf/common.sh@116 -- # sync 00:30:35.801 03:18:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:35.801 03:18:30 -- nvmf/common.sh@119 -- # set +e 00:30:35.801 03:18:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:35.801 03:18:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:35.801 rmmod nvme_tcp 00:30:35.801 rmmod nvme_fabrics 00:30:35.801 rmmod nvme_keyring 00:30:35.801 03:18:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:35.801 03:18:30 -- nvmf/common.sh@123 -- # set -e 00:30:35.801 03:18:30 -- nvmf/common.sh@124 -- # return 0 00:30:35.801 03:18:30 -- nvmf/common.sh@477 -- # '[' -n 2138840 ']' 00:30:35.801 03:18:30 -- nvmf/common.sh@478 -- # killprocess 2138840 00:30:35.801 03:18:30 -- common/autotest_common.sh@926 -- # '[' -z 2138840 ']' 00:30:35.801 03:18:30 -- common/autotest_common.sh@930 -- # kill -0 2138840 00:30:35.801 03:18:30 -- common/autotest_common.sh@931 -- # uname 00:30:35.801 03:18:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:35.801 03:18:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2138840 00:30:35.801 03:18:30 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:35.801 03:18:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:35.801 03:18:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2138840' 00:30:35.801 killing process with pid 2138840 00:30:35.801 03:18:30 -- common/autotest_common.sh@945 -- # kill 2138840 00:30:35.801 [2024-07-14 03:18:30.772267] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:35.801 03:18:30 -- common/autotest_common.sh@950 -- # wait 2138840 00:30:37.227 03:18:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:37.227 03:18:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:37.227 03:18:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:37.227 03:18:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:37.227 03:18:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:37.227 03:18:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.227 03:18:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:37.227 03:18:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.759 03:18:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:39.759 00:30:39.759 real 0m17.791s 00:30:39.759 user 0m26.270s 00:30:39.759 sys 0m2.245s 00:30:39.759 03:18:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:39.759 03:18:34 -- common/autotest_common.sh@10 -- # set +x 00:30:39.759 ************************************ 00:30:39.759 END TEST nvmf_identify_passthru 00:30:39.759 ************************************ 00:30:39.759 03:18:34 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:39.759 03:18:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:39.759 03:18:34 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:30:39.759 03:18:34 -- common/autotest_common.sh@10 -- # set +x 00:30:39.759 ************************************ 00:30:39.759 START TEST nvmf_dif 00:30:39.759 ************************************ 00:30:39.759 03:18:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:39.759 * Looking for test storage... 00:30:39.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:39.759 03:18:34 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:39.759 03:18:34 -- nvmf/common.sh@7 -- # uname -s 00:30:39.759 03:18:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.759 03:18:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.759 03:18:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.759 03:18:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.759 03:18:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.759 03:18:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.759 03:18:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.759 03:18:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.759 03:18:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.759 03:18:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.759 03:18:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:39.759 03:18:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:39.759 03:18:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.759 03:18:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.759 03:18:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.759 03:18:34 -- nvmf/common.sh@44 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.759 03:18:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.759 03:18:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.759 03:18:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.759 03:18:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.759 03:18:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.759 03:18:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.759 03:18:34 -- paths/export.sh@5 -- # export PATH 00:30:39.759 03:18:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.759 03:18:34 -- nvmf/common.sh@46 -- # : 0 00:30:39.759 03:18:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:39.759 03:18:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:39.759 03:18:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:39.759 03:18:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.759 03:18:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.759 03:18:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:39.759 03:18:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:39.759 03:18:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:39.759 03:18:34 -- target/dif.sh@15 -- # NULL_META=16 00:30:39.759 03:18:34 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:39.759 03:18:34 -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:39.759 03:18:34 -- target/dif.sh@15 -- # NULL_DIF=1 00:30:39.759 03:18:34 -- target/dif.sh@135 -- # nvmftestinit 00:30:39.759 03:18:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:39.759 03:18:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.759 03:18:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:39.759 03:18:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:39.759 03:18:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:39.759 03:18:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.759 03:18:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:39.759 03:18:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.759 03:18:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 
00:30:39.759 03:18:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:39.759 03:18:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:39.759 03:18:34 -- common/autotest_common.sh@10 -- # set +x 00:30:41.661 03:18:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:41.661 03:18:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:41.661 03:18:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:41.661 03:18:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:41.661 03:18:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:41.661 03:18:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:41.661 03:18:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:41.661 03:18:36 -- nvmf/common.sh@294 -- # net_devs=() 00:30:41.661 03:18:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:41.661 03:18:36 -- nvmf/common.sh@295 -- # e810=() 00:30:41.661 03:18:36 -- nvmf/common.sh@295 -- # local -ga e810 00:30:41.661 03:18:36 -- nvmf/common.sh@296 -- # x722=() 00:30:41.661 03:18:36 -- nvmf/common.sh@296 -- # local -ga x722 00:30:41.661 03:18:36 -- nvmf/common.sh@297 -- # mlx=() 00:30:41.661 03:18:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:41.661 03:18:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.661 03:18:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.661 03:18:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.661 03:18:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.661 03:18:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.661 03:18:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.661 03:18:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.661 03:18:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.661 03:18:36 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.661 03:18:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.661 03:18:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.661 03:18:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:41.661 03:18:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:41.661 03:18:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:41.661 03:18:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:41.661 03:18:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:41.661 03:18:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:41.661 03:18:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:41.661 03:18:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:41.661 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:41.661 03:18:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:41.661 03:18:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:41.661 03:18:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.661 03:18:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.661 03:18:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:41.661 03:18:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:41.661 03:18:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:41.661 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:41.661 03:18:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:41.661 03:18:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:41.661 03:18:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.661 03:18:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.661 03:18:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:41.661 03:18:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:41.661 03:18:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:41.661 03:18:36 -- nvmf/common.sh@371 -- # [[ tcp == 
rdma ]] 00:30:41.661 03:18:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:41.661 03:18:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.661 03:18:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:41.661 03:18:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.661 03:18:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:41.661 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:41.661 03:18:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.661 03:18:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:41.661 03:18:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.661 03:18:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:41.661 03:18:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.661 03:18:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:41.661 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:41.661 03:18:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.661 03:18:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:41.661 03:18:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:41.661 03:18:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:41.661 03:18:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:41.661 03:18:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:41.661 03:18:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.661 03:18:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.661 03:18:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.661 03:18:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:41.661 03:18:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.661 03:18:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.661 03:18:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 
00:30:41.661 03:18:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.661 03:18:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.661 03:18:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:41.661 03:18:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:41.661 03:18:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.661 03:18:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.661 03:18:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.661 03:18:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.661 03:18:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:41.661 03:18:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.661 03:18:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.661 03:18:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.661 03:18:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:41.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:41.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:30:41.661 00:30:41.661 --- 10.0.0.2 ping statistics --- 00:30:41.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.661 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:30:41.662 03:18:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:41.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:30:41.662 00:30:41.662 --- 10.0.0.1 ping statistics --- 00:30:41.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.662 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:30:41.662 03:18:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.662 03:18:36 -- nvmf/common.sh@410 -- # return 0 00:30:41.662 03:18:36 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:30:41.662 03:18:36 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:42.597 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:42.597 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:42.597 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:42.597 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:42.597 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:42.597 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:42.597 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:42.597 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:42.597 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:42.597 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:42.597 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:42.597 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:42.597 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:42.597 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:42.597 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:42.597 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:42.597 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:42.597 03:18:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:42.597 03:18:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 
00:30:42.597 03:18:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:42.597 03:18:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:42.597 03:18:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:42.597 03:18:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:42.597 03:18:37 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:42.597 03:18:37 -- target/dif.sh@137 -- # nvmfappstart 00:30:42.597 03:18:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:42.597 03:18:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:42.597 03:18:37 -- common/autotest_common.sh@10 -- # set +x 00:30:42.597 03:18:37 -- nvmf/common.sh@469 -- # nvmfpid=2142167 00:30:42.597 03:18:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:42.597 03:18:37 -- nvmf/common.sh@470 -- # waitforlisten 2142167 00:30:42.597 03:18:37 -- common/autotest_common.sh@819 -- # '[' -z 2142167 ']' 00:30:42.597 03:18:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:42.597 03:18:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:42.597 03:18:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:42.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:42.597 03:18:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:42.597 03:18:37 -- common/autotest_common.sh@10 -- # set +x 00:30:42.856 [2024-07-14 03:18:37.872400] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:30:42.856 [2024-07-14 03:18:37.872496] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:42.856 EAL: No free 2048 kB hugepages reported on node 1 00:30:42.856 [2024-07-14 03:18:37.942365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:42.856 [2024-07-14 03:18:38.030322] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:42.856 [2024-07-14 03:18:38.030498] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:42.856 [2024-07-14 03:18:38.030519] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:42.856 [2024-07-14 03:18:38.030534] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:42.856 [2024-07-14 03:18:38.030565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.790 03:18:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:43.790 03:18:38 -- common/autotest_common.sh@852 -- # return 0 00:30:43.790 03:18:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:43.790 03:18:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:43.790 03:18:38 -- common/autotest_common.sh@10 -- # set +x 00:30:43.790 03:18:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:43.790 03:18:38 -- target/dif.sh@139 -- # create_transport 00:30:43.790 03:18:38 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:43.790 03:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:43.790 03:18:38 -- common/autotest_common.sh@10 -- # set +x 00:30:43.790 [2024-07-14 03:18:38.817628] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:30:43.790 03:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:43.790 03:18:38 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:43.790 03:18:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:43.790 03:18:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:43.790 03:18:38 -- common/autotest_common.sh@10 -- # set +x 00:30:43.790 ************************************ 00:30:43.790 START TEST fio_dif_1_default 00:30:43.790 ************************************ 00:30:43.790 03:18:38 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:30:43.790 03:18:38 -- target/dif.sh@86 -- # create_subsystems 0 00:30:43.790 03:18:38 -- target/dif.sh@28 -- # local sub 00:30:43.790 03:18:38 -- target/dif.sh@30 -- # for sub in "$@" 00:30:43.790 03:18:38 -- target/dif.sh@31 -- # create_subsystem 0 00:30:43.790 03:18:38 -- target/dif.sh@18 -- # local sub_id=0 00:30:43.790 03:18:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:43.790 03:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:43.790 03:18:38 -- common/autotest_common.sh@10 -- # set +x 00:30:43.790 bdev_null0 00:30:43.790 03:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:43.790 03:18:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:43.790 03:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:43.790 03:18:38 -- common/autotest_common.sh@10 -- # set +x 00:30:43.790 03:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:43.790 03:18:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:43.790 03:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:43.790 03:18:38 -- common/autotest_common.sh@10 -- # set +x 00:30:43.790 03:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:43.790 03:18:38 -- target/dif.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:43.790 03:18:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:43.790 03:18:38 -- common/autotest_common.sh@10 -- # set +x 00:30:43.790 [2024-07-14 03:18:38.853874] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.790 03:18:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:43.790 03:18:38 -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:43.790 03:18:38 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:43.790 03:18:38 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:43.790 03:18:38 -- nvmf/common.sh@520 -- # config=() 00:30:43.790 03:18:38 -- nvmf/common.sh@520 -- # local subsystem config 00:30:43.790 03:18:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:43.790 03:18:38 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.790 03:18:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:43.790 { 00:30:43.790 "params": { 00:30:43.790 "name": "Nvme$subsystem", 00:30:43.790 "trtype": "$TEST_TRANSPORT", 00:30:43.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:43.790 "adrfam": "ipv4", 00:30:43.790 "trsvcid": "$NVMF_PORT", 00:30:43.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:43.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:43.790 "hdgst": ${hdgst:-false}, 00:30:43.790 "ddgst": ${ddgst:-false} 00:30:43.790 }, 00:30:43.790 "method": "bdev_nvme_attach_controller" 00:30:43.790 } 00:30:43.790 EOF 00:30:43.790 )") 00:30:43.790 03:18:38 -- target/dif.sh@82 -- # gen_fio_conf 00:30:43.790 03:18:38 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:43.790 03:18:38 -- target/dif.sh@54 -- # local file 00:30:43.791 03:18:38 -- common/autotest_common.sh@1316 -- # local 
fio_dir=/usr/src/fio 00:30:43.791 03:18:38 -- target/dif.sh@56 -- # cat 00:30:43.791 03:18:38 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:43.791 03:18:38 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:43.791 03:18:38 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.791 03:18:38 -- common/autotest_common.sh@1320 -- # shift 00:30:43.791 03:18:38 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:43.791 03:18:38 -- nvmf/common.sh@542 -- # cat 00:30:43.791 03:18:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.791 03:18:38 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.791 03:18:38 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:43.791 03:18:38 -- target/dif.sh@72 -- # (( file <= files )) 00:30:43.791 03:18:38 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:43.791 03:18:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:43.791 03:18:38 -- nvmf/common.sh@544 -- # jq . 
00:30:43.791 03:18:38 -- nvmf/common.sh@545 -- # IFS=, 00:30:43.791 03:18:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:43.791 "params": { 00:30:43.791 "name": "Nvme0", 00:30:43.791 "trtype": "tcp", 00:30:43.791 "traddr": "10.0.0.2", 00:30:43.791 "adrfam": "ipv4", 00:30:43.791 "trsvcid": "4420", 00:30:43.791 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:43.791 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:43.791 "hdgst": false, 00:30:43.791 "ddgst": false 00:30:43.791 }, 00:30:43.791 "method": "bdev_nvme_attach_controller" 00:30:43.791 }' 00:30:43.791 03:18:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:43.791 03:18:38 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:43.791 03:18:38 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:43.791 03:18:38 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:43.791 03:18:38 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:43.791 03:18:38 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:43.791 03:18:38 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:43.791 03:18:38 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:43.791 03:18:38 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:43.791 03:18:38 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:44.049 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:44.049 fio-3.35 00:30:44.049 Starting 1 thread 00:30:44.049 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.616 [2024-07-14 03:18:39.590212] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:30:44.616 [2024-07-14 03:18:39.590284] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:30:54.584 00:30:54.584 filename0: (groupid=0, jobs=1): err= 0: pid=2142410: Sun Jul 14 03:18:49 2024 00:30:54.584 read: IOPS=137, BW=552KiB/s (565kB/s)(5520KiB/10003msec) 00:30:54.584 slat (nsec): min=4544, max=43610, avg=10130.69, stdev=4425.69 00:30:54.584 clat (usec): min=866, max=45116, avg=28960.95, stdev=18915.82 00:30:54.584 lat (usec): min=873, max=45131, avg=28971.08, stdev=18915.48 00:30:54.584 clat percentiles (usec): 00:30:54.584 | 1.00th=[ 881], 5.00th=[ 906], 10.00th=[ 922], 20.00th=[ 979], 00:30:54.584 | 30.00th=[ 1057], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:30:54.584 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:30:54.584 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:30:54.584 | 99.99th=[45351] 00:30:54.584 bw ( KiB/s): min= 351, max= 768, per=99.31%, avg=549.00, stdev=177.35, samples=19 00:30:54.584 iops : min= 87, max= 192, avg=137.21, stdev=44.39, samples=19 00:30:54.584 lat (usec) : 1000=25.07% 00:30:54.584 lat (msec) : 2=6.23%, 50=68.70% 00:30:54.584 cpu : usr=90.37%, sys=9.36%, ctx=12, majf=0, minf=265 00:30:54.584 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:54.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:54.584 issued rwts: total=1380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:54.584 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:54.584 00:30:54.584 Run status group 0 (all jobs): 00:30:54.584 READ: bw=552KiB/s (565kB/s), 552KiB/s-552KiB/s (565kB/s-565kB/s), io=5520KiB (5652kB), run=10003-10003msec 00:30:54.843 03:18:49 -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:54.843 03:18:49 -- target/dif.sh@43 -- # local sub 00:30:54.843 03:18:49 -- target/dif.sh@45 -- # 
for sub in "$@" 00:30:54.843 03:18:49 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:54.843 03:18:49 -- target/dif.sh@36 -- # local sub_id=0 00:30:54.843 03:18:49 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:54.843 03:18:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.843 03:18:49 -- common/autotest_common.sh@10 -- # set +x 00:30:54.843 03:18:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.843 03:18:49 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:54.843 03:18:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.843 03:18:49 -- common/autotest_common.sh@10 -- # set +x 00:30:54.843 03:18:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.843 00:30:54.843 real 0m11.096s 00:30:54.843 user 0m10.031s 00:30:54.843 sys 0m1.208s 00:30:54.843 03:18:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:54.843 03:18:49 -- common/autotest_common.sh@10 -- # set +x 00:30:54.843 ************************************ 00:30:54.843 END TEST fio_dif_1_default 00:30:54.843 ************************************ 00:30:54.843 03:18:49 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:54.843 03:18:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:54.843 03:18:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:54.843 03:18:49 -- common/autotest_common.sh@10 -- # set +x 00:30:54.843 ************************************ 00:30:54.843 START TEST fio_dif_1_multi_subsystems 00:30:54.843 ************************************ 00:30:54.843 03:18:49 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:30:54.843 03:18:49 -- target/dif.sh@92 -- # local files=1 00:30:54.843 03:18:49 -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:54.843 03:18:49 -- target/dif.sh@28 -- # local sub 00:30:54.843 03:18:49 -- target/dif.sh@30 -- # for sub in "$@" 00:30:54.843 03:18:49 -- target/dif.sh@31 
-- # create_subsystem 0 00:30:54.843 03:18:49 -- target/dif.sh@18 -- # local sub_id=0 00:30:54.843 03:18:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:54.843 03:18:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.843 03:18:49 -- common/autotest_common.sh@10 -- # set +x 00:30:54.843 bdev_null0 00:30:54.843 03:18:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.843 03:18:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:54.843 03:18:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.843 03:18:49 -- common/autotest_common.sh@10 -- # set +x 00:30:54.843 03:18:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.843 03:18:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:54.843 03:18:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.843 03:18:49 -- common/autotest_common.sh@10 -- # set +x 00:30:54.843 03:18:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.843 03:18:49 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:54.843 03:18:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.843 03:18:49 -- common/autotest_common.sh@10 -- # set +x 00:30:54.843 [2024-07-14 03:18:49.974352] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:54.843 03:18:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.843 03:18:49 -- target/dif.sh@30 -- # for sub in "$@" 00:30:54.843 03:18:49 -- target/dif.sh@31 -- # create_subsystem 1 00:30:54.843 03:18:49 -- target/dif.sh@18 -- # local sub_id=1 00:30:54.843 03:18:49 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:54.843 03:18:49 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:30:54.843 03:18:49 -- common/autotest_common.sh@10 -- # set +x 00:30:54.843 bdev_null1 00:30:54.843 03:18:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.844 03:18:49 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:54.844 03:18:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.844 03:18:49 -- common/autotest_common.sh@10 -- # set +x 00:30:54.844 03:18:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.844 03:18:49 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:54.844 03:18:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.844 03:18:49 -- common/autotest_common.sh@10 -- # set +x 00:30:54.844 03:18:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.844 03:18:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:54.844 03:18:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:54.844 03:18:50 -- common/autotest_common.sh@10 -- # set +x 00:30:54.844 03:18:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:54.844 03:18:50 -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:54.844 03:18:50 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:54.844 03:18:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:54.844 03:18:50 -- nvmf/common.sh@520 -- # config=() 00:30:54.844 03:18:50 -- nvmf/common.sh@520 -- # local subsystem config 00:30:54.844 03:18:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:54.844 03:18:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:54.844 03:18:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:54.844 { 00:30:54.844 "params": { 00:30:54.844 "name": "Nvme$subsystem", 00:30:54.844 "trtype": "$TEST_TRANSPORT", 00:30:54.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:54.844 "adrfam": 
"ipv4", 00:30:54.844 "trsvcid": "$NVMF_PORT", 00:30:54.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:54.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:54.844 "hdgst": ${hdgst:-false}, 00:30:54.844 "ddgst": ${ddgst:-false} 00:30:54.844 }, 00:30:54.844 "method": "bdev_nvme_attach_controller" 00:30:54.844 } 00:30:54.844 EOF 00:30:54.844 )") 00:30:54.844 03:18:50 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:54.844 03:18:50 -- target/dif.sh@82 -- # gen_fio_conf 00:30:54.844 03:18:50 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:54.844 03:18:50 -- target/dif.sh@54 -- # local file 00:30:54.844 03:18:50 -- target/dif.sh@56 -- # cat 00:30:54.844 03:18:50 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:54.844 03:18:50 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:54.844 03:18:50 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:54.844 03:18:50 -- common/autotest_common.sh@1320 -- # shift 00:30:54.844 03:18:50 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:54.844 03:18:50 -- nvmf/common.sh@542 -- # cat 00:30:54.844 03:18:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:54.844 03:18:50 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:54.844 03:18:50 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:54.844 03:18:50 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:54.844 03:18:50 -- target/dif.sh@72 -- # (( file <= files )) 00:30:54.844 03:18:50 -- target/dif.sh@73 -- # cat 00:30:54.844 03:18:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:54.844 03:18:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:54.844 
03:18:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:54.844 { 00:30:54.844 "params": { 00:30:54.844 "name": "Nvme$subsystem", 00:30:54.844 "trtype": "$TEST_TRANSPORT", 00:30:54.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:54.844 "adrfam": "ipv4", 00:30:54.844 "trsvcid": "$NVMF_PORT", 00:30:54.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:54.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:54.844 "hdgst": ${hdgst:-false}, 00:30:54.844 "ddgst": ${ddgst:-false} 00:30:54.844 }, 00:30:54.844 "method": "bdev_nvme_attach_controller" 00:30:54.844 } 00:30:54.844 EOF 00:30:54.844 )") 00:30:54.844 03:18:50 -- nvmf/common.sh@542 -- # cat 00:30:54.844 03:18:50 -- target/dif.sh@72 -- # (( file++ )) 00:30:54.844 03:18:50 -- target/dif.sh@72 -- # (( file <= files )) 00:30:54.844 03:18:50 -- nvmf/common.sh@544 -- # jq . 00:30:54.844 03:18:50 -- nvmf/common.sh@545 -- # IFS=, 00:30:54.844 03:18:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:54.844 "params": { 00:30:54.844 "name": "Nvme0", 00:30:54.844 "trtype": "tcp", 00:30:54.844 "traddr": "10.0.0.2", 00:30:54.844 "adrfam": "ipv4", 00:30:54.844 "trsvcid": "4420", 00:30:54.844 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:54.844 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:54.844 "hdgst": false, 00:30:54.844 "ddgst": false 00:30:54.844 }, 00:30:54.844 "method": "bdev_nvme_attach_controller" 00:30:54.844 },{ 00:30:54.844 "params": { 00:30:54.844 "name": "Nvme1", 00:30:54.844 "trtype": "tcp", 00:30:54.844 "traddr": "10.0.0.2", 00:30:54.844 "adrfam": "ipv4", 00:30:54.844 "trsvcid": "4420", 00:30:54.844 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:54.844 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:54.844 "hdgst": false, 00:30:54.844 "ddgst": false 00:30:54.844 }, 00:30:54.844 "method": "bdev_nvme_attach_controller" 00:30:54.844 }' 00:30:54.844 03:18:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:54.844 03:18:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:54.844 03:18:50 
-- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:54.844 03:18:50 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:54.844 03:18:50 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:54.844 03:18:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:54.844 03:18:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:54.844 03:18:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:54.844 03:18:50 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:54.844 03:18:50 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:55.103 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:55.103 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:55.103 fio-3.35 00:30:55.103 Starting 2 threads 00:30:55.103 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.035 [2024-07-14 03:18:50.977470] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:30:56.036 [2024-07-14 03:18:50.977553] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:06.000 00:31:06.000 filename0: (groupid=0, jobs=1): err= 0: pid=2143849: Sun Jul 14 03:19:01 2024 00:31:06.000 read: IOPS=95, BW=383KiB/s (393kB/s)(3840KiB/10017msec) 00:31:06.000 slat (nsec): min=4742, max=66653, avg=12406.38, stdev=6496.71 00:31:06.000 clat (usec): min=40901, max=43178, avg=41697.35, stdev=446.78 00:31:06.000 lat (usec): min=40923, max=43239, avg=41709.76, stdev=448.30 00:31:06.000 clat percentiles (usec): 00:31:06.000 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:06.000 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:31:06.000 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:06.000 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:31:06.000 | 99.99th=[43254] 00:31:06.000 bw ( KiB/s): min= 352, max= 416, per=49.82%, avg=382.40, stdev=12.61, samples=20 00:31:06.000 iops : min= 88, max= 104, avg=95.60, stdev= 3.15, samples=20 00:31:06.000 lat (msec) : 50=100.00% 00:31:06.000 cpu : usr=94.58%, sys=5.12%, ctx=17, majf=0, minf=212 00:31:06.000 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:06.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.000 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.000 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:06.000 filename1: (groupid=0, jobs=1): err= 0: pid=2143850: Sun Jul 14 03:19:01 2024 00:31:06.000 read: IOPS=95, BW=383KiB/s (393kB/s)(3840KiB/10017msec) 00:31:06.000 slat (nsec): min=4433, max=51488, avg=12777.49, stdev=6363.35 00:31:06.000 clat (usec): min=40881, max=42966, avg=41696.40, stdev=454.99 00:31:06.000 lat (usec): min=40888, max=42981, avg=41709.18, stdev=455.38 00:31:06.000 
clat percentiles (usec): 00:31:06.000 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:31:06.000 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:31:06.000 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:06.000 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:31:06.000 | 99.99th=[42730] 00:31:06.000 bw ( KiB/s): min= 352, max= 416, per=49.82%, avg=382.40, stdev=12.61, samples=20 00:31:06.000 iops : min= 88, max= 104, avg=95.60, stdev= 3.15, samples=20 00:31:06.000 lat (msec) : 50=100.00% 00:31:06.000 cpu : usr=94.71%, sys=4.98%, ctx=14, majf=0, minf=116 00:31:06.000 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:06.000 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.000 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.000 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.000 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:06.000 00:31:06.000 Run status group 0 (all jobs): 00:31:06.000 READ: bw=767KiB/s (785kB/s), 383KiB/s-383KiB/s (393kB/s-393kB/s), io=7680KiB (7864kB), run=10017-10017msec 00:31:06.296 03:19:01 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:06.296 03:19:01 -- target/dif.sh@43 -- # local sub 00:31:06.296 03:19:01 -- target/dif.sh@45 -- # for sub in "$@" 00:31:06.296 03:19:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:06.296 03:19:01 -- target/dif.sh@36 -- # local sub_id=0 00:31:06.296 03:19:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:06.296 03:19:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.296 03:19:01 -- common/autotest_common.sh@10 -- # set +x 00:31:06.296 03:19:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.296 03:19:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:06.296 03:19:01 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.296 03:19:01 -- common/autotest_common.sh@10 -- # set +x 00:31:06.296 03:19:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.296 03:19:01 -- target/dif.sh@45 -- # for sub in "$@" 00:31:06.296 03:19:01 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:06.296 03:19:01 -- target/dif.sh@36 -- # local sub_id=1 00:31:06.296 03:19:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:06.296 03:19:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.296 03:19:01 -- common/autotest_common.sh@10 -- # set +x 00:31:06.296 03:19:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.296 03:19:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:06.296 03:19:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.296 03:19:01 -- common/autotest_common.sh@10 -- # set +x 00:31:06.296 03:19:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.296 00:31:06.296 real 0m11.484s 00:31:06.296 user 0m20.394s 00:31:06.296 sys 0m1.291s 00:31:06.296 03:19:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:06.296 03:19:01 -- common/autotest_common.sh@10 -- # set +x 00:31:06.296 ************************************ 00:31:06.296 END TEST fio_dif_1_multi_subsystems 00:31:06.296 ************************************ 00:31:06.296 03:19:01 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:06.296 03:19:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:06.296 03:19:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:06.296 03:19:01 -- common/autotest_common.sh@10 -- # set +x 00:31:06.296 ************************************ 00:31:06.296 START TEST fio_dif_rand_params 00:31:06.296 ************************************ 00:31:06.296 03:19:01 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:31:06.296 03:19:01 -- target/dif.sh@100 -- # local NULL_DIF 00:31:06.296 
03:19:01 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:06.296 03:19:01 -- target/dif.sh@103 -- # NULL_DIF=3 00:31:06.296 03:19:01 -- target/dif.sh@103 -- # bs=128k 00:31:06.296 03:19:01 -- target/dif.sh@103 -- # numjobs=3 00:31:06.296 03:19:01 -- target/dif.sh@103 -- # iodepth=3 00:31:06.296 03:19:01 -- target/dif.sh@103 -- # runtime=5 00:31:06.296 03:19:01 -- target/dif.sh@105 -- # create_subsystems 0 00:31:06.296 03:19:01 -- target/dif.sh@28 -- # local sub 00:31:06.296 03:19:01 -- target/dif.sh@30 -- # for sub in "$@" 00:31:06.296 03:19:01 -- target/dif.sh@31 -- # create_subsystem 0 00:31:06.296 03:19:01 -- target/dif.sh@18 -- # local sub_id=0 00:31:06.296 03:19:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:06.296 03:19:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.296 03:19:01 -- common/autotest_common.sh@10 -- # set +x 00:31:06.296 bdev_null0 00:31:06.296 03:19:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.296 03:19:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:06.296 03:19:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.296 03:19:01 -- common/autotest_common.sh@10 -- # set +x 00:31:06.296 03:19:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.296 03:19:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:06.296 03:19:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.296 03:19:01 -- common/autotest_common.sh@10 -- # set +x 00:31:06.296 03:19:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.296 03:19:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:06.296 03:19:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.296 03:19:01 -- common/autotest_common.sh@10 -- # set 
+x 00:31:06.296 [2024-07-14 03:19:01.482331] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.296 03:19:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.296 03:19:01 -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:06.296 03:19:01 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:06.296 03:19:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:06.296 03:19:01 -- nvmf/common.sh@520 -- # config=() 00:31:06.296 03:19:01 -- nvmf/common.sh@520 -- # local subsystem config 00:31:06.296 03:19:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:06.296 03:19:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:06.296 03:19:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:06.296 { 00:31:06.296 "params": { 00:31:06.296 "name": "Nvme$subsystem", 00:31:06.296 "trtype": "$TEST_TRANSPORT", 00:31:06.296 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:06.296 "adrfam": "ipv4", 00:31:06.296 "trsvcid": "$NVMF_PORT", 00:31:06.296 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:06.296 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:06.296 "hdgst": ${hdgst:-false}, 00:31:06.296 "ddgst": ${ddgst:-false} 00:31:06.296 }, 00:31:06.296 "method": "bdev_nvme_attach_controller" 00:31:06.296 } 00:31:06.296 EOF 00:31:06.296 )") 00:31:06.296 03:19:01 -- target/dif.sh@82 -- # gen_fio_conf 00:31:06.296 03:19:01 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:06.296 03:19:01 -- target/dif.sh@54 -- # local file 00:31:06.296 03:19:01 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:06.296 03:19:01 -- target/dif.sh@56 -- # cat 00:31:06.296 03:19:01 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:06.296 03:19:01 -- common/autotest_common.sh@1318 -- # local 
sanitizers 00:31:06.296 03:19:01 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:06.296 03:19:01 -- common/autotest_common.sh@1320 -- # shift 00:31:06.296 03:19:01 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:06.296 03:19:01 -- nvmf/common.sh@542 -- # cat 00:31:06.296 03:19:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:06.296 03:19:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:06.296 03:19:01 -- target/dif.sh@72 -- # (( file <= files )) 00:31:06.296 03:19:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:06.296 03:19:01 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:06.296 03:19:01 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:06.296 03:19:01 -- nvmf/common.sh@544 -- # jq . 00:31:06.296 03:19:01 -- nvmf/common.sh@545 -- # IFS=, 00:31:06.296 03:19:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:06.296 "params": { 00:31:06.296 "name": "Nvme0", 00:31:06.296 "trtype": "tcp", 00:31:06.296 "traddr": "10.0.0.2", 00:31:06.296 "adrfam": "ipv4", 00:31:06.296 "trsvcid": "4420", 00:31:06.296 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:06.296 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:06.296 "hdgst": false, 00:31:06.296 "ddgst": false 00:31:06.296 }, 00:31:06.296 "method": "bdev_nvme_attach_controller" 00:31:06.296 }' 00:31:06.296 03:19:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:06.296 03:19:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:06.296 03:19:01 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:06.296 03:19:01 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:06.296 03:19:01 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:06.296 03:19:01 -- common/autotest_common.sh@1324 -- # awk 
'{print $3}' 00:31:06.556 03:19:01 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:06.556 03:19:01 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:06.556 03:19:01 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:06.556 03:19:01 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:06.556 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:06.556 ... 00:31:06.556 fio-3.35 00:31:06.556 Starting 3 threads 00:31:06.556 EAL: No free 2048 kB hugepages reported on node 1 00:31:07.123 [2024-07-14 03:19:02.248851] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:31:07.123 [2024-07-14 03:19:02.248966] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:12.398 00:31:12.398 filename0: (groupid=0, jobs=1): err= 0: pid=2145288: Sun Jul 14 03:19:07 2024 00:31:12.398 read: IOPS=216, BW=27.1MiB/s (28.4MB/s)(136MiB/5009msec) 00:31:12.398 slat (nsec): min=4736, max=29034, avg=13034.01, stdev=2653.54 00:31:12.398 clat (usec): min=5418, max=55519, avg=13837.49, stdev=12644.66 00:31:12.398 lat (usec): min=5431, max=55527, avg=13850.52, stdev=12644.60 00:31:12.398 clat percentiles (usec): 00:31:12.398 | 1.00th=[ 5538], 5.00th=[ 6194], 10.00th=[ 6587], 20.00th=[ 8455], 00:31:12.398 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10421], 00:31:12.398 | 70.00th=[11469], 80.00th=[12518], 90.00th=[14615], 95.00th=[52167], 00:31:12.398 | 99.00th=[54264], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:31:12.398 | 99.99th=[55313] 00:31:12.398 bw ( KiB/s): min=15616, max=36608, per=37.21%, avg=27673.60, stdev=6997.80, samples=10 00:31:12.398 iops : min= 122, max= 286, avg=216.20, stdev=54.67, samples=10 00:31:12.398 lat (msec) : 
10=53.97%, 20=36.25%, 50=1.75%, 100=8.03% 00:31:12.398 cpu : usr=88.60%, sys=8.41%, ctx=343, majf=0, minf=77 00:31:12.398 IO depths : 1=2.0%, 2=98.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:12.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.398 issued rwts: total=1084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.398 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:12.398 filename0: (groupid=0, jobs=1): err= 0: pid=2145289: Sun Jul 14 03:19:07 2024 00:31:12.398 read: IOPS=233, BW=29.2MiB/s (30.7MB/s)(148MiB/5043msec) 00:31:12.398 slat (nsec): min=4574, max=34534, avg=13716.87, stdev=2504.61 00:31:12.398 clat (usec): min=5333, max=95341, avg=12731.91, stdev=11509.50 00:31:12.398 lat (usec): min=5347, max=95355, avg=12745.62, stdev=11509.36 00:31:12.398 clat percentiles (usec): 00:31:12.398 | 1.00th=[ 5473], 5.00th=[ 5932], 10.00th=[ 6259], 20.00th=[ 7242], 00:31:12.398 | 30.00th=[ 8848], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10421], 00:31:12.398 | 70.00th=[11207], 80.00th=[12649], 90.00th=[14484], 95.00th=[50594], 00:31:12.398 | 99.00th=[55313], 99.50th=[57410], 99.90th=[94897], 99.95th=[94897], 00:31:12.398 | 99.99th=[94897] 00:31:12.398 bw ( KiB/s): min=19712, max=38912, per=40.52%, avg=30136.30, stdev=5643.03, samples=10 00:31:12.398 iops : min= 154, max= 304, avg=235.40, stdev=44.12, samples=10 00:31:12.398 lat (msec) : 10=51.36%, 20=41.69%, 50=1.36%, 100=5.59% 00:31:12.398 cpu : usr=92.68%, sys=6.41%, ctx=9, majf=0, minf=135 00:31:12.398 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:12.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.398 issued rwts: total=1180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.398 latency : target=0, window=0, 
percentile=100.00%, depth=3 00:31:12.398 filename0: (groupid=0, jobs=1): err= 0: pid=2145290: Sun Jul 14 03:19:07 2024 00:31:12.398 read: IOPS=132, BW=16.6MiB/s (17.4MB/s)(83.2MiB/5009msec) 00:31:12.398 slat (nsec): min=4725, max=31453, avg=12950.33, stdev=2428.98 00:31:12.398 clat (usec): min=6931, max=96274, avg=22542.07, stdev=20400.72 00:31:12.398 lat (usec): min=6944, max=96287, avg=22555.02, stdev=20400.67 00:31:12.398 clat percentiles (usec): 00:31:12.398 | 1.00th=[ 7439], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10421], 00:31:12.398 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12256], 60.00th=[13042], 00:31:12.398 | 70.00th=[14222], 80.00th=[51643], 90.00th=[53740], 95.00th=[54264], 00:31:12.398 | 99.00th=[94897], 99.50th=[94897], 99.90th=[95945], 99.95th=[95945], 00:31:12.398 | 99.99th=[95945] 00:31:12.398 bw ( KiB/s): min= 9984, max=26368, per=22.82%, avg=16972.80, stdev=4478.25, samples=10 00:31:12.398 iops : min= 78, max= 206, avg=132.60, stdev=34.99, samples=10 00:31:12.398 lat (msec) : 10=16.07%, 20=59.61%, 50=0.30%, 100=24.02% 00:31:12.398 cpu : usr=94.47%, sys=5.05%, ctx=16, majf=0, minf=54 00:31:12.398 IO depths : 1=2.6%, 2=97.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:12.398 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.398 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.398 issued rwts: total=666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.398 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:12.398 00:31:12.398 Run status group 0 (all jobs): 00:31:12.398 READ: bw=72.6MiB/s (76.2MB/s), 16.6MiB/s-29.2MiB/s (17.4MB/s-30.7MB/s), io=366MiB (384MB), run=5009-5043msec 00:31:12.398 03:19:07 -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:12.398 03:19:07 -- target/dif.sh@43 -- # local sub 00:31:12.398 03:19:07 -- target/dif.sh@45 -- # for sub in "$@" 00:31:12.398 03:19:07 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:12.398 03:19:07 -- 
target/dif.sh@36 -- # local sub_id=0 00:31:12.398 03:19:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:12.398 03:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.398 03:19:07 -- common/autotest_common.sh@10 -- # set +x 00:31:12.398 03:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.398 03:19:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:12.398 03:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.398 03:19:07 -- common/autotest_common.sh@10 -- # set +x 00:31:12.398 03:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.398 03:19:07 -- target/dif.sh@109 -- # NULL_DIF=2 00:31:12.398 03:19:07 -- target/dif.sh@109 -- # bs=4k 00:31:12.398 03:19:07 -- target/dif.sh@109 -- # numjobs=8 00:31:12.398 03:19:07 -- target/dif.sh@109 -- # iodepth=16 00:31:12.398 03:19:07 -- target/dif.sh@109 -- # runtime= 00:31:12.398 03:19:07 -- target/dif.sh@109 -- # files=2 00:31:12.398 03:19:07 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:12.398 03:19:07 -- target/dif.sh@28 -- # local sub 00:31:12.398 03:19:07 -- target/dif.sh@30 -- # for sub in "$@" 00:31:12.398 03:19:07 -- target/dif.sh@31 -- # create_subsystem 0 00:31:12.398 03:19:07 -- target/dif.sh@18 -- # local sub_id=0 00:31:12.398 03:19:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:12.398 03:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.398 03:19:07 -- common/autotest_common.sh@10 -- # set +x 00:31:12.398 bdev_null0 00:31:12.398 03:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.398 03:19:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:12.398 03:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.398 03:19:07 -- common/autotest_common.sh@10 -- # set +x 00:31:12.398 03:19:07 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.398 03:19:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:12.398 03:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.399 03:19:07 -- common/autotest_common.sh@10 -- # set +x 00:31:12.399 03:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.399 03:19:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:12.399 03:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.399 03:19:07 -- common/autotest_common.sh@10 -- # set +x 00:31:12.399 [2024-07-14 03:19:07.623599] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:12.399 03:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.399 03:19:07 -- target/dif.sh@30 -- # for sub in "$@" 00:31:12.399 03:19:07 -- target/dif.sh@31 -- # create_subsystem 1 00:31:12.399 03:19:07 -- target/dif.sh@18 -- # local sub_id=1 00:31:12.399 03:19:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:12.399 03:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.399 03:19:07 -- common/autotest_common.sh@10 -- # set +x 00:31:12.399 bdev_null1 00:31:12.399 03:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.399 03:19:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:12.399 03:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.399 03:19:07 -- common/autotest_common.sh@10 -- # set +x 00:31:12.399 03:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.399 03:19:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:12.399 03:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.399 03:19:07 -- 
common/autotest_common.sh@10 -- # set +x 00:31:12.659 03:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.659 03:19:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:12.659 03:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.659 03:19:07 -- common/autotest_common.sh@10 -- # set +x 00:31:12.659 03:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.659 03:19:07 -- target/dif.sh@30 -- # for sub in "$@" 00:31:12.659 03:19:07 -- target/dif.sh@31 -- # create_subsystem 2 00:31:12.659 03:19:07 -- target/dif.sh@18 -- # local sub_id=2 00:31:12.659 03:19:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:12.659 03:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.659 03:19:07 -- common/autotest_common.sh@10 -- # set +x 00:31:12.659 bdev_null2 00:31:12.659 03:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.659 03:19:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:12.659 03:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.659 03:19:07 -- common/autotest_common.sh@10 -- # set +x 00:31:12.659 03:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.659 03:19:07 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:12.659 03:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.659 03:19:07 -- common/autotest_common.sh@10 -- # set +x 00:31:12.659 03:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.659 03:19:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:12.659 03:19:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:12.659 03:19:07 -- common/autotest_common.sh@10 -- # set +x 00:31:12.660 
03:19:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:12.660 03:19:07 -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:12.660 03:19:07 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:12.660 03:19:07 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:12.660 03:19:07 -- nvmf/common.sh@520 -- # config=() 00:31:12.660 03:19:07 -- nvmf/common.sh@520 -- # local subsystem config 00:31:12.660 03:19:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:12.660 03:19:07 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:12.660 03:19:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:12.660 { 00:31:12.660 "params": { 00:31:12.660 "name": "Nvme$subsystem", 00:31:12.660 "trtype": "$TEST_TRANSPORT", 00:31:12.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:12.660 "adrfam": "ipv4", 00:31:12.660 "trsvcid": "$NVMF_PORT", 00:31:12.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:12.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:12.660 "hdgst": ${hdgst:-false}, 00:31:12.660 "ddgst": ${ddgst:-false} 00:31:12.660 }, 00:31:12.660 "method": "bdev_nvme_attach_controller" 00:31:12.660 } 00:31:12.660 EOF 00:31:12.660 )") 00:31:12.660 03:19:07 -- target/dif.sh@82 -- # gen_fio_conf 00:31:12.660 03:19:07 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:12.660 03:19:07 -- target/dif.sh@54 -- # local file 00:31:12.660 03:19:07 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:12.660 03:19:07 -- target/dif.sh@56 -- # cat 00:31:12.660 03:19:07 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:12.660 03:19:07 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:12.660 03:19:07 -- common/autotest_common.sh@1319 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:12.660 03:19:07 -- common/autotest_common.sh@1320 -- # shift 00:31:12.660 03:19:07 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:12.660 03:19:07 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:12.660 03:19:07 -- nvmf/common.sh@542 -- # cat 00:31:12.660 03:19:07 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:12.660 03:19:07 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:12.660 03:19:07 -- target/dif.sh@72 -- # (( file <= files )) 00:31:12.660 03:19:07 -- target/dif.sh@73 -- # cat 00:31:12.660 03:19:07 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:12.660 03:19:07 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:12.660 03:19:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:12.660 03:19:07 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:12.660 { 00:31:12.660 "params": { 00:31:12.660 "name": "Nvme$subsystem", 00:31:12.660 "trtype": "$TEST_TRANSPORT", 00:31:12.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:12.660 "adrfam": "ipv4", 00:31:12.660 "trsvcid": "$NVMF_PORT", 00:31:12.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:12.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:12.660 "hdgst": ${hdgst:-false}, 00:31:12.660 "ddgst": ${ddgst:-false} 00:31:12.660 }, 00:31:12.660 "method": "bdev_nvme_attach_controller" 00:31:12.660 } 00:31:12.660 EOF 00:31:12.660 )") 00:31:12.660 03:19:07 -- nvmf/common.sh@542 -- # cat 00:31:12.660 03:19:07 -- target/dif.sh@72 -- # (( file++ )) 00:31:12.660 03:19:07 -- target/dif.sh@72 -- # (( file <= files )) 00:31:12.660 03:19:07 -- target/dif.sh@73 -- # cat 00:31:12.660 03:19:07 -- target/dif.sh@72 -- # (( file++ )) 00:31:12.660 03:19:07 -- target/dif.sh@72 -- # (( file <= files )) 00:31:12.660 03:19:07 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:12.660 03:19:07 
-- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:12.660 { 00:31:12.660 "params": { 00:31:12.660 "name": "Nvme$subsystem", 00:31:12.660 "trtype": "$TEST_TRANSPORT", 00:31:12.660 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:12.660 "adrfam": "ipv4", 00:31:12.660 "trsvcid": "$NVMF_PORT", 00:31:12.660 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:12.660 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:12.660 "hdgst": ${hdgst:-false}, 00:31:12.660 "ddgst": ${ddgst:-false} 00:31:12.660 }, 00:31:12.660 "method": "bdev_nvme_attach_controller" 00:31:12.660 } 00:31:12.660 EOF 00:31:12.660 )") 00:31:12.660 03:19:07 -- nvmf/common.sh@542 -- # cat 00:31:12.660 03:19:07 -- nvmf/common.sh@544 -- # jq . 00:31:12.660 03:19:07 -- nvmf/common.sh@545 -- # IFS=, 00:31:12.660 03:19:07 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:12.660 "params": { 00:31:12.660 "name": "Nvme0", 00:31:12.660 "trtype": "tcp", 00:31:12.660 "traddr": "10.0.0.2", 00:31:12.660 "adrfam": "ipv4", 00:31:12.660 "trsvcid": "4420", 00:31:12.660 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:12.660 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:12.660 "hdgst": false, 00:31:12.660 "ddgst": false 00:31:12.660 }, 00:31:12.660 "method": "bdev_nvme_attach_controller" 00:31:12.660 },{ 00:31:12.660 "params": { 00:31:12.660 "name": "Nvme1", 00:31:12.660 "trtype": "tcp", 00:31:12.660 "traddr": "10.0.0.2", 00:31:12.660 "adrfam": "ipv4", 00:31:12.660 "trsvcid": "4420", 00:31:12.660 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:12.660 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:12.660 "hdgst": false, 00:31:12.660 "ddgst": false 00:31:12.660 }, 00:31:12.660 "method": "bdev_nvme_attach_controller" 00:31:12.660 },{ 00:31:12.660 "params": { 00:31:12.660 "name": "Nvme2", 00:31:12.660 "trtype": "tcp", 00:31:12.660 "traddr": "10.0.0.2", 00:31:12.660 "adrfam": "ipv4", 00:31:12.660 "trsvcid": "4420", 00:31:12.660 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:12.660 "hostnqn": "nqn.2016-06.io.spdk:host2", 
00:31:12.660 "hdgst": false, 00:31:12.660 "ddgst": false 00:31:12.660 }, 00:31:12.660 "method": "bdev_nvme_attach_controller" 00:31:12.660 }' 00:31:12.660 03:19:07 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:12.660 03:19:07 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:12.660 03:19:07 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:12.660 03:19:07 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:12.660 03:19:07 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:12.660 03:19:07 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:12.660 03:19:07 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:12.660 03:19:07 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:12.660 03:19:07 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:12.660 03:19:07 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:12.921 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:12.921 ... 00:31:12.921 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:12.921 ... 00:31:12.921 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:12.921 ... 00:31:12.921 fio-3.35 00:31:12.921 Starting 24 threads 00:31:12.921 EAL: No free 2048 kB hugepages reported on node 1 00:31:13.489 [2024-07-14 03:19:08.707691] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
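The shell trace above shows how `nvmf/common.sh` builds the JSON payload that fio later receives via `--spdk_json_conf`: one fragment per subsystem is captured with a here-document, accumulated into an array, and comma-joined. A minimal standalone sketch of that pattern follows; the variable values (`TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, `NVMF_PORT`) are illustrative stand-ins for the CI machine's real settings, not values taken verbatim from its environment.

```shell
#!/usr/bin/env bash
# Sketch of the config-assembly pattern seen in the trace (nvmf/common.sh):
# a here-doc fragment per subsystem, collected into an array, comma-joined.
# Values below are assumed placeholders mirroring the resolved log output.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 0 1 2; do
  # ${hdgst:-false}/${ddgst:-false} default the digest flags off, as in the log
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done

# Comma-join the fragments in a subshell so IFS is not changed globally,
# matching the effect of the IFS=, / printf pair in the trace
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '%s\n' "$joined"
```

The joined string is what appears expanded in the trace as the three `bdev_nvme_attach_controller` entries for `cnode0`..`cnode2`.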
00:31:13.489 [2024-07-14 03:19:08.707781] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:25.700 00:31:25.700 filename0: (groupid=0, jobs=1): err= 0: pid=2146178: Sun Jul 14 03:19:18 2024 00:31:25.700 read: IOPS=498, BW=1992KiB/s (2040kB/s)(19.5MiB/10006msec) 00:31:25.700 slat (nsec): min=5834, max=87298, avg=38083.60, stdev=12235.45 00:31:25.700 clat (usec): min=16805, max=60963, avg=31817.22, stdev=3088.72 00:31:25.700 lat (usec): min=16861, max=60986, avg=31855.30, stdev=3086.92 00:31:25.700 clat percentiles (usec): 00:31:25.700 | 1.00th=[19268], 5.00th=[30278], 10.00th=[31065], 20.00th=[31327], 00:31:25.700 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:25.700 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[33424], 00:31:25.700 | 99.00th=[39584], 99.50th=[54264], 99.90th=[61080], 99.95th=[61080], 00:31:25.700 | 99.99th=[61080] 00:31:25.700 bw ( KiB/s): min= 1776, max= 2208, per=4.22%, avg=1997.63, stdev=92.89, samples=19 00:31:25.700 iops : min= 444, max= 552, avg=499.37, stdev=23.26, samples=19 00:31:25.700 lat (msec) : 20=1.00%, 50=98.15%, 100=0.84% 00:31:25.700 cpu : usr=96.71%, sys=1.73%, ctx=142, majf=0, minf=82 00:31:25.700 IO depths : 1=3.4%, 2=9.1%, 4=23.3%, 8=54.9%, 16=9.3%, 32=0.0%, >=64=0.0% 00:31:25.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.700 complete : 0=0.0%, 4=93.8%, 8=0.7%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.700 issued rwts: total=4984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.700 filename0: (groupid=0, jobs=1): err= 0: pid=2146179: Sun Jul 14 03:19:18 2024 00:31:25.700 read: IOPS=494, BW=1976KiB/s (2023kB/s)(19.3MiB/10012msec) 00:31:25.700 slat (usec): min=8, max=132, avg=38.64, stdev=18.20 00:31:25.700 clat (usec): min=21963, max=76060, avg=32064.53, stdev=2593.75 00:31:25.700 lat (usec): min=21992, max=76082, avg=32103.17, 
stdev=2591.79 00:31:25.700 clat percentiles (usec): 00:31:25.700 | 1.00th=[28181], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:31:25.700 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:25.700 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:31:25.700 | 99.00th=[43779], 99.50th=[54264], 99.90th=[61604], 99.95th=[61604], 00:31:25.700 | 99.99th=[76022] 00:31:25.700 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1972.15, stdev=83.10, samples=20 00:31:25.700 iops : min= 448, max= 512, avg=493.00, stdev=20.80, samples=20 00:31:25.700 lat (msec) : 50=99.35%, 100=0.65% 00:31:25.700 cpu : usr=98.49%, sys=1.10%, ctx=26, majf=0, minf=78 00:31:25.700 IO depths : 1=5.0%, 2=11.0%, 4=24.5%, 8=52.0%, 16=7.6%, 32=0.0%, >=64=0.0% 00:31:25.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.700 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.700 issued rwts: total=4946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.700 filename0: (groupid=0, jobs=1): err= 0: pid=2146180: Sun Jul 14 03:19:18 2024 00:31:25.700 read: IOPS=502, BW=2009KiB/s (2057kB/s)(19.6MiB/10016msec) 00:31:25.700 slat (usec): min=5, max=151, avg=20.44, stdev=14.42 00:31:25.700 clat (usec): min=6938, max=52934, avg=31654.40, stdev=3342.72 00:31:25.700 lat (usec): min=6947, max=52942, avg=31674.84, stdev=3343.11 00:31:25.700 clat percentiles (usec): 00:31:25.700 | 1.00th=[15401], 5.00th=[29492], 10.00th=[30802], 20.00th=[31327], 00:31:25.700 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:25.700 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33817], 00:31:25.700 | 99.00th=[42730], 99.50th=[45876], 99.90th=[49546], 99.95th=[50594], 00:31:25.700 | 99.99th=[52691] 00:31:25.700 bw ( KiB/s): min= 1916, max= 2228, per=4.25%, avg=2011.20, stdev=77.68, samples=20 00:31:25.700 iops : min= 
479, max= 557, avg=502.80, stdev=19.42, samples=20 00:31:25.700 lat (msec) : 10=0.30%, 20=1.83%, 50=97.79%, 100=0.08% 00:31:25.700 cpu : usr=98.57%, sys=1.02%, ctx=17, majf=0, minf=82 00:31:25.700 IO depths : 1=2.4%, 2=8.3%, 4=23.5%, 8=55.6%, 16=10.1%, 32=0.0%, >=64=0.0% 00:31:25.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.700 complete : 0=0.0%, 4=94.0%, 8=0.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.700 issued rwts: total=5030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.700 filename0: (groupid=0, jobs=1): err= 0: pid=2146181: Sun Jul 14 03:19:18 2024 00:31:25.700 read: IOPS=496, BW=1986KiB/s (2034kB/s)(19.5MiB/10042msec) 00:31:25.700 slat (usec): min=5, max=165, avg=64.83, stdev=35.57 00:31:25.700 clat (usec): min=7692, max=58311, avg=31695.50, stdev=2749.55 00:31:25.700 lat (usec): min=7701, max=58414, avg=31760.33, stdev=2750.15 00:31:25.700 clat percentiles (usec): 00:31:25.700 | 1.00th=[25035], 5.00th=[30016], 10.00th=[30540], 20.00th=[31065], 00:31:25.700 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:31:25.700 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:31:25.700 | 99.00th=[38011], 99.50th=[52691], 99.90th=[55837], 99.95th=[57934], 00:31:25.700 | 99.99th=[58459] 00:31:25.700 bw ( KiB/s): min= 1904, max= 2096, per=4.20%, avg=1988.00, stdev=66.86, samples=20 00:31:25.700 iops : min= 476, max= 524, avg=497.00, stdev=16.71, samples=20 00:31:25.700 lat (msec) : 10=0.22%, 20=0.60%, 50=98.62%, 100=0.56% 00:31:25.700 cpu : usr=98.48%, sys=1.07%, ctx=25, majf=0, minf=82 00:31:25.700 IO depths : 1=4.2%, 2=9.6%, 4=21.8%, 8=55.4%, 16=9.0%, 32=0.0%, >=64=0.0% 00:31:25.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.700 complete : 0=0.0%, 4=93.5%, 8=1.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.700 issued rwts: total=4986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:31:25.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.700 filename0: (groupid=0, jobs=1): err= 0: pid=2146182: Sun Jul 14 03:19:18 2024 00:31:25.700 read: IOPS=489, BW=1959KiB/s (2006kB/s)(19.2MiB/10031msec) 00:31:25.700 slat (usec): min=7, max=161, avg=37.02, stdev=25.69 00:31:25.700 clat (usec): min=9481, max=86784, avg=32395.68, stdev=4673.63 00:31:25.700 lat (usec): min=9513, max=86820, avg=32432.70, stdev=4670.74 00:31:25.700 clat percentiles (usec): 00:31:25.700 | 1.00th=[25035], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:31:25.700 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:25.700 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33162], 95.00th=[35914], 00:31:25.700 | 99.00th=[55837], 99.50th=[56886], 99.90th=[86508], 99.95th=[86508], 00:31:25.700 | 99.99th=[86508] 00:31:25.700 bw ( KiB/s): min= 1744, max= 2048, per=4.14%, avg=1960.21, stdev=96.00, samples=19 00:31:25.700 iops : min= 436, max= 512, avg=490.05, stdev=24.00, samples=19 00:31:25.700 lat (msec) : 10=0.12%, 20=0.67%, 50=97.21%, 100=2.00% 00:31:25.700 cpu : usr=96.40%, sys=1.88%, ctx=139, majf=0, minf=102 00:31:25.700 IO depths : 1=0.5%, 2=5.7%, 4=22.1%, 8=59.1%, 16=12.5%, 32=0.0%, >=64=0.0% 00:31:25.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.700 complete : 0=0.0%, 4=94.0%, 8=0.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.700 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.700 filename0: (groupid=0, jobs=1): err= 0: pid=2146183: Sun Jul 14 03:19:18 2024 00:31:25.700 read: IOPS=494, BW=1977KiB/s (2024kB/s)(19.3MiB/10004msec) 00:31:25.700 slat (usec): min=8, max=114, avg=37.04, stdev=12.55 00:31:25.700 clat (usec): min=12421, max=58242, avg=32029.43, stdev=2150.76 00:31:25.700 lat (usec): min=12432, max=58279, avg=32066.48, stdev=2150.73 00:31:25.700 clat percentiles (usec): 00:31:25.700 | 
1.00th=[30278], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:31:25.700 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:25.700 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32900], 00:31:25.700 | 99.00th=[36963], 99.50th=[54264], 99.90th=[57934], 99.95th=[58459], 00:31:25.700 | 99.99th=[58459] 00:31:25.700 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1980.42, stdev=78.48, samples=19 00:31:25.700 iops : min= 448, max= 512, avg=495.11, stdev=19.62, samples=19 00:31:25.700 lat (msec) : 20=0.04%, 50=99.27%, 100=0.69% 00:31:25.700 cpu : usr=98.66%, sys=0.94%, ctx=20, majf=0, minf=81 00:31:25.700 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:25.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.700 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.700 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.700 filename0: (groupid=0, jobs=1): err= 0: pid=2146184: Sun Jul 14 03:19:18 2024 00:31:25.700 read: IOPS=495, BW=1984KiB/s (2031kB/s)(19.5MiB/10045msec) 00:31:25.700 slat (nsec): min=7896, max=94563, avg=21703.86, stdev=13877.37 00:31:25.700 clat (usec): min=15694, max=57099, avg=32086.34, stdev=2877.10 00:31:25.700 lat (usec): min=15702, max=57120, avg=32108.04, stdev=2876.70 00:31:25.700 clat percentiles (usec): 00:31:25.700 | 1.00th=[24511], 5.00th=[28181], 10.00th=[30802], 20.00th=[31327], 00:31:25.700 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:25.700 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[36439], 00:31:25.700 | 99.00th=[42206], 99.50th=[47973], 99.90th=[56886], 99.95th=[56886], 00:31:25.700 | 99.99th=[56886] 00:31:25.700 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=1986.40, stdev=59.76, samples=20 00:31:25.700 iops : min= 480, max= 512, avg=496.60, stdev=14.94, 
samples=20 00:31:25.700 lat (msec) : 20=0.44%, 50=99.08%, 100=0.48% 00:31:25.700 cpu : usr=98.50%, sys=1.12%, ctx=15, majf=0, minf=75 00:31:25.700 IO depths : 1=3.9%, 2=9.4%, 4=22.6%, 8=55.4%, 16=8.7%, 32=0.0%, >=64=0.0% 00:31:25.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.700 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.700 issued rwts: total=4982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.700 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.700 filename0: (groupid=0, jobs=1): err= 0: pid=2146185: Sun Jul 14 03:19:18 2024 00:31:25.700 read: IOPS=495, BW=1983KiB/s (2030kB/s)(19.5MiB/10063msec) 00:31:25.700 slat (usec): min=3, max=110, avg=31.35, stdev=17.58 00:31:25.700 clat (usec): min=12142, max=69891, avg=31902.28, stdev=2456.85 00:31:25.700 lat (usec): min=12150, max=69911, avg=31933.63, stdev=2458.45 00:31:25.700 clat percentiles (usec): 00:31:25.700 | 1.00th=[21890], 5.00th=[30278], 10.00th=[31065], 20.00th=[31327], 00:31:25.700 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:25.700 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33817], 00:31:25.700 | 99.00th=[38011], 99.50th=[43779], 99.90th=[46924], 99.95th=[46924], 00:31:25.700 | 99.99th=[69731] 00:31:25.700 bw ( KiB/s): min= 1904, max= 2048, per=4.20%, avg=1988.80, stdev=61.00, samples=20 00:31:25.700 iops : min= 476, max= 512, avg=497.20, stdev=15.25, samples=20 00:31:25.700 lat (msec) : 20=0.90%, 50=99.06%, 100=0.04% 00:31:25.700 cpu : usr=95.17%, sys=2.50%, ctx=117, majf=0, minf=47 00:31:25.700 IO depths : 1=1.9%, 2=7.5%, 4=23.3%, 8=56.5%, 16=10.8%, 32=0.0%, >=64=0.0% 00:31:25.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.700 complete : 0=0.0%, 4=94.0%, 8=0.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.700 issued rwts: total=4988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.700 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:31:25.700 filename1: (groupid=0, jobs=1): err= 0: pid=2146186: Sun Jul 14 03:19:18 2024 00:31:25.700 read: IOPS=493, BW=1972KiB/s (2020kB/s)(19.3MiB/10035msec) 00:31:25.700 slat (usec): min=7, max=137, avg=33.25, stdev=17.42 00:31:25.700 clat (usec): min=17327, max=67668, avg=32201.65, stdev=3543.10 00:31:25.700 lat (usec): min=17340, max=67693, avg=32234.90, stdev=3542.64 00:31:25.700 clat percentiles (usec): 00:31:25.700 | 1.00th=[21627], 5.00th=[30278], 10.00th=[31065], 20.00th=[31327], 00:31:25.700 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:25.700 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33162], 95.00th=[35390], 00:31:25.700 | 99.00th=[48497], 99.50th=[58983], 99.90th=[67634], 99.95th=[67634], 00:31:25.700 | 99.99th=[67634] 00:31:25.700 bw ( KiB/s): min= 1795, max= 2080, per=4.17%, avg=1973.10, stdev=77.41, samples=20 00:31:25.700 iops : min= 448, max= 520, avg=493.20, stdev=19.47, samples=20 00:31:25.700 lat (msec) : 20=0.71%, 50=98.52%, 100=0.77% 00:31:25.701 cpu : usr=94.50%, sys=2.62%, ctx=215, majf=0, minf=98 00:31:25.701 IO depths : 1=1.0%, 2=5.1%, 4=17.7%, 8=63.3%, 16=12.9%, 32=0.0%, >=64=0.0% 00:31:25.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.701 complete : 0=0.0%, 4=92.7%, 8=2.9%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.701 issued rwts: total=4948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.701 filename1: (groupid=0, jobs=1): err= 0: pid=2146187: Sun Jul 14 03:19:18 2024 00:31:25.701 read: IOPS=495, BW=1980KiB/s (2028kB/s)(19.4MiB/10046msec) 00:31:25.701 slat (usec): min=7, max=136, avg=39.89, stdev=24.71 00:31:25.701 clat (usec): min=9482, max=58890, avg=32020.95, stdev=3870.69 00:31:25.701 lat (usec): min=9505, max=58928, avg=32060.84, stdev=3873.92 00:31:25.701 clat percentiles (usec): 00:31:25.701 | 1.00th=[17695], 5.00th=[30278], 10.00th=[30802], 
20.00th=[31327], 00:31:25.701 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:25.701 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33162], 95.00th=[34341], 00:31:25.701 | 99.00th=[52691], 99.50th=[54789], 99.90th=[56886], 99.95th=[58983], 00:31:25.701 | 99.99th=[58983] 00:31:25.701 bw ( KiB/s): min= 1888, max= 2048, per=4.19%, avg=1983.20, stdev=59.24, samples=20 00:31:25.701 iops : min= 472, max= 512, avg=495.80, stdev=14.81, samples=20 00:31:25.701 lat (msec) : 10=0.22%, 20=1.13%, 50=97.23%, 100=1.43% 00:31:25.701 cpu : usr=95.45%, sys=2.41%, ctx=186, majf=0, minf=78 00:31:25.701 IO depths : 1=0.9%, 2=5.9%, 4=21.1%, 8=59.9%, 16=12.2%, 32=0.0%, >=64=0.0% 00:31:25.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.701 complete : 0=0.0%, 4=93.5%, 8=1.4%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.701 issued rwts: total=4974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.701 filename1: (groupid=0, jobs=1): err= 0: pid=2146188: Sun Jul 14 03:19:18 2024 00:31:25.701 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.4MiB/10041msec) 00:31:25.701 slat (usec): min=6, max=588, avg=36.24, stdev=17.80 00:31:25.701 clat (usec): min=20181, max=67366, avg=32042.84, stdev=2077.38 00:31:25.701 lat (usec): min=20190, max=67387, avg=32079.08, stdev=2076.46 00:31:25.701 clat percentiles (usec): 00:31:25.701 | 1.00th=[28967], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:31:25.701 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:25.701 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:31:25.701 | 99.00th=[38011], 99.50th=[44303], 99.90th=[67634], 99.95th=[67634], 00:31:25.701 | 99.99th=[67634] 00:31:25.701 bw ( KiB/s): min= 1920, max= 2048, per=4.18%, avg=1980.00, stdev=63.97, samples=20 00:31:25.701 iops : min= 480, max= 512, avg=495.00, stdev=15.99, samples=20 00:31:25.701 lat (msec) : 50=99.60%, 
100=0.40% 00:31:25.701 cpu : usr=93.05%, sys=3.51%, ctx=95, majf=0, minf=84 00:31:25.701 IO depths : 1=4.9%, 2=10.5%, 4=24.2%, 8=52.7%, 16=7.7%, 32=0.0%, >=64=0.0% 00:31:25.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.701 complete : 0=0.0%, 4=94.1%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.701 issued rwts: total=4960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.701 filename1: (groupid=0, jobs=1): err= 0: pid=2146189: Sun Jul 14 03:19:18 2024 00:31:25.701 read: IOPS=501, BW=2004KiB/s (2052kB/s)(19.7MiB/10046msec) 00:31:25.701 slat (usec): min=4, max=136, avg=20.74, stdev=17.05 00:31:25.701 clat (usec): min=6985, max=57918, avg=31792.38, stdev=5018.59 00:31:25.701 lat (usec): min=6995, max=57966, avg=31813.11, stdev=5018.52 00:31:25.701 clat percentiles (usec): 00:31:25.701 | 1.00th=[12518], 5.00th=[23200], 10.00th=[29754], 20.00th=[31065], 00:31:25.701 | 30.00th=[31589], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:25.701 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33424], 95.00th=[37487], 00:31:25.701 | 99.00th=[51119], 99.50th=[53740], 99.90th=[55837], 99.95th=[57934], 00:31:25.701 | 99.99th=[57934] 00:31:25.701 bw ( KiB/s): min= 1844, max= 2352, per=4.24%, avg=2007.00, stdev=118.00, samples=20 00:31:25.701 iops : min= 461, max= 588, avg=501.75, stdev=29.50, samples=20 00:31:25.701 lat (msec) : 10=0.64%, 20=2.38%, 50=95.77%, 100=1.21% 00:31:25.701 cpu : usr=98.35%, sys=1.22%, ctx=26, majf=0, minf=62 00:31:25.701 IO depths : 1=1.2%, 2=2.3%, 4=9.7%, 8=74.8%, 16=12.1%, 32=0.0%, >=64=0.0% 00:31:25.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.701 complete : 0=0.0%, 4=90.2%, 8=4.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.701 issued rwts: total=5034,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.701 filename1: 
(groupid=0, jobs=1): err= 0: pid=2146190: Sun Jul 14 03:19:18 2024 00:31:25.701 read: IOPS=486, BW=1945KiB/s (1991kB/s)(19.1MiB/10068msec) 00:31:25.701 slat (usec): min=8, max=179, avg=51.97, stdev=36.57 00:31:25.701 clat (usec): min=7466, max=70448, avg=32483.98, stdev=5044.22 00:31:25.701 lat (usec): min=7502, max=70563, avg=32535.94, stdev=5048.52 00:31:25.701 clat percentiles (usec): 00:31:25.701 | 1.00th=[16057], 5.00th=[30016], 10.00th=[30802], 20.00th=[31327], 00:31:25.701 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:25.701 | 70.00th=[32375], 80.00th=[32637], 90.00th=[34866], 95.00th=[40109], 00:31:25.701 | 99.00th=[54264], 99.50th=[55313], 99.90th=[69731], 99.95th=[70779], 00:31:25.701 | 99.99th=[70779] 00:31:25.701 bw ( KiB/s): min= 1720, max= 2080, per=4.12%, avg=1951.80, stdev=98.34, samples=20 00:31:25.701 iops : min= 430, max= 520, avg=487.95, stdev=24.59, samples=20 00:31:25.701 lat (msec) : 10=0.29%, 20=1.35%, 50=96.34%, 100=2.02% 00:31:25.701 cpu : usr=98.53%, sys=1.02%, ctx=24, majf=0, minf=61 00:31:25.701 IO depths : 1=3.1%, 2=8.6%, 4=22.8%, 8=55.7%, 16=9.7%, 32=0.0%, >=64=0.0% 00:31:25.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.701 complete : 0=0.0%, 4=93.8%, 8=0.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.701 issued rwts: total=4895,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.701 filename1: (groupid=0, jobs=1): err= 0: pid=2146191: Sun Jul 14 03:19:18 2024 00:31:25.701 read: IOPS=491, BW=1965KiB/s (2012kB/s)(19.3MiB/10045msec) 00:31:25.701 slat (usec): min=8, max=164, avg=47.90, stdev=26.72 00:31:25.701 clat (usec): min=10130, max=64504, avg=32130.05, stdev=3351.45 00:31:25.701 lat (usec): min=10154, max=64520, avg=32177.95, stdev=3349.97 00:31:25.701 clat percentiles (usec): 00:31:25.701 | 1.00th=[22152], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:31:25.701 | 30.00th=[31589], 
40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:25.701 | 70.00th=[32113], 80.00th=[32375], 90.00th=[33162], 95.00th=[34341], 00:31:25.701 | 99.00th=[47973], 99.50th=[54264], 99.90th=[56361], 99.95th=[56886], 00:31:25.701 | 99.99th=[64750] 00:31:25.701 bw ( KiB/s): min= 1808, max= 2048, per=4.15%, avg=1967.20, stdev=69.16, samples=20 00:31:25.701 iops : min= 452, max= 512, avg=491.80, stdev=17.29, samples=20 00:31:25.701 lat (msec) : 20=0.71%, 50=98.34%, 100=0.95% 00:31:25.701 cpu : usr=98.57%, sys=1.02%, ctx=23, majf=0, minf=78 00:31:25.701 IO depths : 1=1.9%, 2=7.9%, 4=24.1%, 8=55.4%, 16=10.6%, 32=0.0%, >=64=0.0% 00:31:25.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.701 complete : 0=0.0%, 4=94.1%, 8=0.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.701 issued rwts: total=4934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.701 filename1: (groupid=0, jobs=1): err= 0: pid=2146192: Sun Jul 14 03:19:18 2024 00:31:25.701 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.2MiB/10037msec) 00:31:25.701 slat (usec): min=5, max=148, avg=34.07, stdev=25.39 00:31:25.701 clat (usec): min=8620, max=79081, avg=32448.12, stdev=5431.15 00:31:25.701 lat (usec): min=8654, max=79095, avg=32482.19, stdev=5431.03 00:31:25.701 clat percentiles (usec): 00:31:25.701 | 1.00th=[10421], 5.00th=[29492], 10.00th=[30802], 20.00th=[31327], 00:31:25.701 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:25.701 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33817], 95.00th=[38536], 00:31:25.701 | 99.00th=[54264], 99.50th=[64750], 99.90th=[79168], 99.95th=[79168], 00:31:25.701 | 99.99th=[79168] 00:31:25.701 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=1957.90, stdev=77.85, samples=20 00:31:25.701 iops : min= 448, max= 512, avg=489.40, stdev=19.56, samples=20 00:31:25.701 lat (msec) : 10=0.37%, 20=1.43%, 50=95.82%, 100=2.38% 00:31:25.701 cpu : usr=97.31%, 
sys=1.58%, ctx=82, majf=0, minf=57 00:31:25.701 IO depths : 1=2.7%, 2=6.9%, 4=18.2%, 8=61.3%, 16=10.9%, 32=0.0%, >=64=0.0% 00:31:25.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.701 complete : 0=0.0%, 4=92.6%, 8=2.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.701 issued rwts: total=4910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.701 filename1: (groupid=0, jobs=1): err= 0: pid=2146193: Sun Jul 14 03:19:18 2024 00:31:25.701 read: IOPS=494, BW=1979KiB/s (2027kB/s)(19.3MiB/10007msec) 00:31:25.701 slat (usec): min=7, max=597, avg=29.80, stdev=20.40 00:31:25.701 clat (usec): min=8907, max=61636, avg=32099.96, stdev=4694.41 00:31:25.701 lat (usec): min=8924, max=61662, avg=32129.76, stdev=4693.42 00:31:25.701 clat percentiles (usec): 00:31:25.701 | 1.00th=[13829], 5.00th=[30016], 10.00th=[30802], 20.00th=[31327], 00:31:25.701 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 00:31:25.701 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[34866], 00:31:25.701 | 99.00th=[53740], 99.50th=[54789], 99.90th=[61604], 99.95th=[61604], 00:31:25.701 | 99.99th=[61604] 00:31:25.701 bw ( KiB/s): min= 1795, max= 2112, per=4.19%, avg=1983.95, stdev=77.30, samples=19 00:31:25.701 iops : min= 448, max= 528, avg=495.95, stdev=19.43, samples=19 00:31:25.701 lat (msec) : 10=0.02%, 20=2.08%, 50=95.84%, 100=2.06% 00:31:25.701 cpu : usr=96.31%, sys=1.99%, ctx=87, majf=0, minf=106 00:31:25.701 IO depths : 1=2.8%, 2=6.6%, 4=15.6%, 8=63.2%, 16=11.8%, 32=0.0%, >=64=0.0% 00:31:25.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.701 complete : 0=0.0%, 4=92.2%, 8=4.1%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.701 issued rwts: total=4952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.701 filename2: (groupid=0, jobs=1): err= 0: pid=2146194: Sun Jul 
14 03:19:18 2024 00:31:25.701 read: IOPS=496, BW=1987KiB/s (2035kB/s)(19.4MiB/10018msec) 00:31:25.701 slat (usec): min=8, max=721, avg=44.78, stdev=31.42 00:31:25.701 clat (usec): min=9683, max=54502, avg=31817.66, stdev=2101.38 00:31:25.701 lat (usec): min=9727, max=54527, avg=31862.45, stdev=2100.88 00:31:25.701 clat percentiles (usec): 00:31:25.701 | 1.00th=[25297], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:31:25.701 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:25.701 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:31:25.701 | 99.00th=[36439], 99.50th=[40633], 99.90th=[54264], 99.95th=[54264], 00:31:25.701 | 99.99th=[54264] 00:31:25.701 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=1984.00, stdev=64.21, samples=20 00:31:25.701 iops : min= 480, max= 512, avg=496.00, stdev=16.05, samples=20 00:31:25.701 lat (msec) : 10=0.08%, 20=0.24%, 50=99.32%, 100=0.36% 00:31:25.701 cpu : usr=91.50%, sys=3.86%, ctx=166, majf=0, minf=77 00:31:25.701 IO depths : 1=5.7%, 2=11.8%, 4=24.4%, 8=51.3%, 16=6.8%, 32=0.0%, >=64=0.0% 00:31:25.701 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.701 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.701 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.701 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.701 filename2: (groupid=0, jobs=1): err= 0: pid=2146195: Sun Jul 14 03:19:18 2024 00:31:25.701 read: IOPS=501, BW=2005KiB/s (2053kB/s)(19.7MiB/10047msec) 00:31:25.702 slat (usec): min=7, max=1057, avg=35.36, stdev=24.10 00:31:25.702 clat (usec): min=6874, max=60255, avg=31632.18, stdev=3419.06 00:31:25.702 lat (usec): min=6899, max=60277, avg=31667.54, stdev=3419.84 00:31:25.702 clat percentiles (usec): 00:31:25.702 | 1.00th=[10421], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327], 00:31:25.702 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113], 
00:31:25.702 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162], 00:31:25.702 | 99.00th=[35914], 99.50th=[49021], 99.90th=[53740], 99.95th=[54789], 00:31:25.702 | 99.99th=[60031] 00:31:25.702 bw ( KiB/s): min= 1916, max= 2400, per=4.24%, avg=2007.80, stdev=110.72, samples=20 00:31:25.702 iops : min= 479, max= 600, avg=501.95, stdev=27.68, samples=20 00:31:25.702 lat (msec) : 10=0.97%, 20=0.66%, 50=97.90%, 100=0.48% 00:31:25.702 cpu : usr=87.91%, sys=5.15%, ctx=228, majf=0, minf=91 00:31:25.702 IO depths : 1=4.1%, 2=9.7%, 4=22.9%, 8=54.5%, 16=8.8%, 32=0.0%, >=64=0.0% 00:31:25.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.702 complete : 0=0.0%, 4=93.7%, 8=0.9%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.702 issued rwts: total=5036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.702 filename2: (groupid=0, jobs=1): err= 0: pid=2146196: Sun Jul 14 03:19:18 2024 00:31:25.702 read: IOPS=493, BW=1974KiB/s (2022kB/s)(19.4MiB/10041msec) 00:31:25.702 slat (usec): min=5, max=755, avg=38.54, stdev=23.73 00:31:25.702 clat (usec): min=9704, max=75889, avg=32059.04, stdev=3303.00 00:31:25.702 lat (usec): min=9782, max=75904, avg=32097.59, stdev=3301.49 00:31:25.702 clat percentiles (usec): 00:31:25.702 | 1.00th=[23725], 5.00th=[30278], 10.00th=[30802], 20.00th=[31327], 00:31:25.702 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113], 00:31:25.702 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424], 00:31:25.702 | 99.00th=[48497], 99.50th=[54264], 99.90th=[63177], 99.95th=[64226], 00:31:25.702 | 99.99th=[76022] 00:31:25.702 bw ( KiB/s): min= 1792, max= 2064, per=4.17%, avg=1975.80, stdev=76.80, samples=20 00:31:25.702 iops : min= 448, max= 516, avg=493.95, stdev=19.20, samples=20 00:31:25.702 lat (msec) : 10=0.12%, 20=0.48%, 50=98.51%, 100=0.89% 00:31:25.702 cpu : usr=93.62%, sys=3.01%, ctx=358, majf=0, minf=93 
00:31:25.702 IO depths : 1=2.1%, 2=6.8%, 4=19.1%, 8=60.3%, 16=11.6%, 32=0.0%, >=64=0.0% 00:31:25.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.702 complete : 0=0.0%, 4=93.1%, 8=2.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.702 issued rwts: total=4956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.702 filename2: (groupid=0, jobs=1): err= 0: pid=2146197: Sun Jul 14 03:19:18 2024 00:31:25.702 read: IOPS=493, BW=1973KiB/s (2020kB/s)(19.3MiB/10033msec) 00:31:25.702 slat (usec): min=7, max=133, avg=33.99, stdev=15.45 00:31:25.702 clat (usec): min=16066, max=59616, avg=32156.72, stdev=2806.78 00:31:25.702 lat (usec): min=16074, max=59631, avg=32190.71, stdev=2805.20 00:31:25.702 clat percentiles (usec): 00:31:25.702 | 1.00th=[27395], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:31:25.702 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:25.702 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33817], 00:31:25.702 | 99.00th=[46400], 99.50th=[56886], 99.90th=[59507], 99.95th=[59507], 00:31:25.702 | 99.99th=[59507] 00:31:25.702 bw ( KiB/s): min= 1779, max= 2048, per=4.17%, avg=1973.10, stdev=72.71, samples=20 00:31:25.702 iops : min= 444, max= 512, avg=493.20, stdev=18.31, samples=20 00:31:25.702 lat (msec) : 20=0.16%, 50=99.03%, 100=0.81% 00:31:25.702 cpu : usr=97.98%, sys=1.45%, ctx=93, majf=0, minf=81 00:31:25.702 IO depths : 1=1.8%, 2=7.6%, 4=23.5%, 8=56.1%, 16=11.0%, 32=0.0%, >=64=0.0% 00:31:25.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.702 complete : 0=0.0%, 4=94.0%, 8=0.6%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:25.702 issued rwts: total=4948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:25.702 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:25.702 filename2: (groupid=0, jobs=1): err= 0: pid=2146198: Sun Jul 14 03:19:18 2024 00:31:25.702 read: 
IOPS=499, BW=1998KiB/s (2046kB/s)(19.5MiB/10006msec)
00:31:25.702 slat (nsec): min=7882, max=89059, avg=22624.02, stdev=15174.72
00:31:25.702 clat (usec): min=6455, max=60887, avg=31870.01, stdev=5532.32
00:31:25.702 lat (usec): min=6463, max=60901, avg=31892.63, stdev=5533.69
00:31:25.702 clat percentiles (usec):
00:31:25.702 | 1.00th=[16319], 5.00th=[21890], 10.00th=[28181], 20.00th=[30802],
00:31:25.702 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113],
00:31:25.702 | 70.00th=[32375], 80.00th=[32637], 90.00th=[34341], 95.00th=[39584],
00:31:25.702 | 99.00th=[54264], 99.50th=[58459], 99.90th=[61080], 99.95th=[61080],
00:31:25.702 | 99.99th=[61080]
00:31:25.702 bw ( KiB/s): min= 1792, max= 2192, per=4.23%, avg=2003.58, stdev=106.07, samples=19
00:31:25.702 iops : min= 448, max= 548, avg=500.89, stdev=26.52, samples=19
00:31:25.702 lat (msec) : 10=0.30%, 20=3.38%, 50=94.68%, 100=1.64%
00:31:25.702 cpu : usr=98.40%, sys=1.21%, ctx=16, majf=0, minf=78
00:31:25.702 IO depths : 1=1.6%, 2=4.7%, 4=14.0%, 8=67.1%, 16=12.5%, 32=0.0%, >=64=0.0%
00:31:25.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:25.702 complete : 0=0.0%, 4=91.6%, 8=4.4%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:25.702 issued rwts: total=4998,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:25.702 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:25.702 filename2: (groupid=0, jobs=1): err= 0: pid=2146199: Sun Jul 14 03:19:18 2024
00:31:25.702 read: IOPS=495, BW=1981KiB/s (2029kB/s)(19.4MiB/10047msec)
00:31:25.702 slat (usec): min=7, max=141, avg=34.73, stdev=20.31
00:31:25.702 clat (usec): min=22874, max=57733, avg=32004.62, stdev=2013.97
00:31:25.702 lat (usec): min=22886, max=57755, avg=32039.35, stdev=2012.28
00:31:25.702 clat percentiles (usec):
00:31:25.702 | 1.00th=[27395], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327],
00:31:25.702 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[32113],
00:31:25.702 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33424],
00:31:25.702 | 99.00th=[38011], 99.50th=[42206], 99.90th=[56886], 99.95th=[56886],
00:31:25.702 | 99.99th=[57934]
00:31:25.702 bw ( KiB/s): min= 1920, max= 2048, per=4.19%, avg=1984.00, stdev=65.66, samples=20
00:31:25.702 iops : min= 480, max= 512, avg=496.00, stdev=16.42, samples=20
00:31:25.702 lat (msec) : 50=99.62%, 100=0.38%
00:31:25.702 cpu : usr=96.01%, sys=1.99%, ctx=116, majf=0, minf=85
00:31:25.702 IO depths : 1=5.4%, 2=11.5%, 4=24.3%, 8=51.7%, 16=7.1%, 32=0.0%, >=64=0.0%
00:31:25.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:25.702 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:25.702 issued rwts: total=4976,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:25.702 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:25.702 filename2: (groupid=0, jobs=1): err= 0: pid=2146200: Sun Jul 14 03:19:18 2024
00:31:25.702 read: IOPS=499, BW=1999KiB/s (2047kB/s)(19.6MiB/10040msec)
00:31:25.702 slat (usec): min=7, max=148, avg=35.47, stdev=18.06
00:31:25.702 clat (usec): min=8045, max=58417, avg=31643.67, stdev=2990.63
00:31:25.702 lat (usec): min=8065, max=58426, avg=31679.14, stdev=2992.28
00:31:25.702 clat percentiles (usec):
00:31:25.702 | 1.00th=[13698], 5.00th=[30540], 10.00th=[31065], 20.00th=[31327],
00:31:25.702 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851],
00:31:25.702 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[33162],
00:31:25.702 | 99.00th=[37487], 99.50th=[38011], 99.90th=[53216], 99.95th=[54264],
00:31:25.702 | 99.99th=[58459]
00:31:25.702 bw ( KiB/s): min= 1920, max= 2180, per=4.22%, avg=2001.00, stdev=82.12, samples=20
00:31:25.702 iops : min= 480, max= 545, avg=500.25, stdev=20.53, samples=20
00:31:25.702 lat (msec) : 10=0.74%, 20=0.76%, 50=98.31%, 100=0.20%
00:31:25.702 cpu : usr=91.81%, sys=3.49%, ctx=187, majf=0, minf=76
00:31:25.702 IO depths : 1=4.5%, 2=10.6%, 4=24.8%, 8=52.1%, 16=8.0%, 32=0.0%, >=64=0.0%
00:31:25.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:25.702 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:25.702 issued rwts: total=5018,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:25.702 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:25.702 filename2: (groupid=0, jobs=1): err= 0: pid=2146201: Sun Jul 14 03:19:18 2024
00:31:25.702 read: IOPS=492, BW=1969KiB/s (2016kB/s)(19.3MiB/10031msec)
00:31:25.702 slat (usec): min=7, max=132, avg=31.17, stdev=15.00
00:31:25.702 clat (usec): min=7225, max=67260, avg=32246.90, stdev=5115.90
00:31:25.702 lat (usec): min=7254, max=67275, avg=32278.07, stdev=5114.62
00:31:25.702 clat percentiles (usec):
00:31:25.702 | 1.00th=[16909], 5.00th=[28181], 10.00th=[30802], 20.00th=[31327],
00:31:25.702 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[32113],
00:31:25.702 | 70.00th=[32375], 80.00th=[32637], 90.00th=[33162], 95.00th=[36439],
00:31:25.702 | 99.00th=[55837], 99.50th=[62129], 99.90th=[67634], 99.95th=[67634],
00:31:25.702 | 99.99th=[67634]
00:31:25.702 bw ( KiB/s): min= 1792, max= 2096, per=4.16%, avg=1969.40, stdev=75.40, samples=20
00:31:25.702 iops : min= 448, max= 524, avg=492.35, stdev=18.85, samples=20
00:31:25.702 lat (msec) : 10=0.85%, 20=0.97%, 50=96.31%, 100=1.86%
00:31:25.702 cpu : usr=98.55%, sys=1.03%, ctx=15, majf=0, minf=53
00:31:25.702 IO depths : 1=2.5%, 2=7.5%, 4=20.9%, 8=58.7%, 16=10.3%, 32=0.0%, >=64=0.0%
00:31:25.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:25.702 complete : 0=0.0%, 4=93.3%, 8=1.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:25.702 issued rwts: total=4938,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:25.702 latency : target=0, window=0, percentile=100.00%, depth=16
00:31:25.702
00:31:25.702 Run status group 0 (all jobs):
00:31:25.702 READ: bw=46.3MiB/s (48.5MB/s), 1945KiB/s-2009KiB/s (1991kB/s-2057kB/s), io=466MiB (488MB), run=10004-10068msec
00:31:25.702 03:19:19 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:31:25.702 03:19:19 -- target/dif.sh@43 -- # local sub
00:31:25.702 03:19:19 -- target/dif.sh@45 -- # for sub in "$@"
00:31:25.702 03:19:19 -- target/dif.sh@46 -- # destroy_subsystem 0
00:31:25.702 03:19:19 -- target/dif.sh@36 -- # local sub_id=0
00:31:25.702 03:19:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:31:25.702 03:19:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:25.702 03:19:19 -- common/autotest_common.sh@10 -- # set +x
00:31:25.702 03:19:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:25.702 03:19:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:31:25.702 03:19:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:25.702 03:19:19 -- common/autotest_common.sh@10 -- # set +x
00:31:25.702 03:19:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:25.702 03:19:19 -- target/dif.sh@45 -- # for sub in "$@"
00:31:25.702 03:19:19 -- target/dif.sh@46 -- # destroy_subsystem 1
00:31:25.702 03:19:19 -- target/dif.sh@36 -- # local sub_id=1
00:31:25.702 03:19:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:25.702 03:19:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:25.702 03:19:19 -- common/autotest_common.sh@10 -- # set +x
00:31:25.702 03:19:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:25.703 03:19:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:31:25.703 03:19:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:25.703 03:19:19 -- common/autotest_common.sh@10 -- # set +x
00:31:25.703 03:19:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:25.703 03:19:19 -- target/dif.sh@45 -- # for sub in "$@"
00:31:25.703 03:19:19 -- target/dif.sh@46 -- # destroy_subsystem 2
00:31:25.703 03:19:19 -- target/dif.sh@36 -- # local sub_id=2
00:31:25.703 03:19:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:31:25.703 03:19:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:25.703 03:19:19 -- common/autotest_common.sh@10 -- # set +x
00:31:25.703 03:19:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:25.703 03:19:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:31:25.703 03:19:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:25.703 03:19:19 -- common/autotest_common.sh@10 -- # set +x
00:31:25.703 03:19:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:25.703 03:19:19 -- target/dif.sh@115 -- # NULL_DIF=1
00:31:25.703 03:19:19 -- target/dif.sh@115 -- # bs=8k,16k,128k
00:31:25.703 03:19:19 -- target/dif.sh@115 -- # numjobs=2
00:31:25.703 03:19:19 -- target/dif.sh@115 -- # iodepth=8
00:31:25.703 03:19:19 -- target/dif.sh@115 -- # runtime=5
00:31:25.703 03:19:19 -- target/dif.sh@115 -- # files=1
00:31:25.703 03:19:19 -- target/dif.sh@117 -- # create_subsystems 0 1
00:31:25.703 03:19:19 -- target/dif.sh@28 -- # local sub
00:31:25.703 03:19:19 -- target/dif.sh@30 -- # for sub in "$@"
00:31:25.703 03:19:19 -- target/dif.sh@31 -- # create_subsystem 0
00:31:25.703 03:19:19 -- target/dif.sh@18 -- # local sub_id=0
00:31:25.703 03:19:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:31:25.703 03:19:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:25.703 03:19:19 -- common/autotest_common.sh@10 -- # set +x
00:31:25.703 bdev_null0
00:31:25.703 03:19:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:25.703 03:19:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:31:25.703 03:19:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:25.703 03:19:19 -- common/autotest_common.sh@10 -- # set +x
00:31:25.703 03:19:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:25.703 03:19:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:31:25.703 03:19:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:25.703 03:19:19 -- common/autotest_common.sh@10 -- # set +x
00:31:25.703 03:19:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:25.703 03:19:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:31:25.703 03:19:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:25.703 03:19:19 -- common/autotest_common.sh@10 -- # set +x
00:31:25.703 [2024-07-14 03:19:19.224981] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:25.703 03:19:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:25.703 03:19:19 -- target/dif.sh@30 -- # for sub in "$@"
00:31:25.703 03:19:19 -- target/dif.sh@31 -- # create_subsystem 1
00:31:25.703 03:19:19 -- target/dif.sh@18 -- # local sub_id=1
00:31:25.703 03:19:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:31:25.703 03:19:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:25.703 03:19:19 -- common/autotest_common.sh@10 -- # set +x
00:31:25.703 bdev_null1
00:31:25.703 03:19:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:25.703 03:19:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:31:25.703 03:19:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:25.703 03:19:19 -- common/autotest_common.sh@10 -- # set +x
00:31:25.703 03:19:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:25.703 03:19:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:31:25.703 03:19:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:25.703 03:19:19 -- common/autotest_common.sh@10 -- # set +x
00:31:25.703 03:19:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:25.703 03:19:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:25.703 03:19:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:25.703 03:19:19 -- common/autotest_common.sh@10 -- # set +x
00:31:25.703 03:19:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:31:25.703 03:19:19 -- target/dif.sh@118 -- # fio /dev/fd/62
00:31:25.703 03:19:19 -- target/dif.sh@118 -- # create_json_sub_conf 0 1
00:31:25.703 03:19:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:31:25.703 03:19:19 -- nvmf/common.sh@520 -- # config=()
00:31:25.703 03:19:19 -- nvmf/common.sh@520 -- # local subsystem config
00:31:25.703 03:19:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:31:25.703 03:19:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:25.703 03:19:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:31:25.703 {
00:31:25.703 "params": {
00:31:25.703 "name": "Nvme$subsystem",
00:31:25.703 "trtype": "$TEST_TRANSPORT",
00:31:25.703 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:25.703 "adrfam": "ipv4",
00:31:25.703 "trsvcid": "$NVMF_PORT",
00:31:25.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:25.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:25.703 "hdgst": ${hdgst:-false},
00:31:25.703 "ddgst": ${ddgst:-false}
00:31:25.703 },
00:31:25.703 "method": "bdev_nvme_attach_controller"
00:31:25.703 }
00:31:25.703 EOF
00:31:25.703 )")
00:31:25.703 03:19:19 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:25.703 03:19:19 -- target/dif.sh@82 -- # gen_fio_conf
00:31:25.703 03:19:19 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio
00:31:25.703 03:19:19 -- target/dif.sh@54 -- # local file
00:31:25.703 03:19:19 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:31:25.703 03:19:19 -- target/dif.sh@56 -- # cat
00:31:25.703 03:19:19 -- common/autotest_common.sh@1318 -- # local sanitizers
00:31:25.703 03:19:19 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:25.703 03:19:19 -- common/autotest_common.sh@1320 -- # shift
00:31:25.703 03:19:19 -- common/autotest_common.sh@1322 -- # local asan_lib=
00:31:25.703 03:19:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}"
00:31:25.703 03:19:19 -- nvmf/common.sh@542 -- # cat
00:31:25.703 03:19:19 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:25.703 03:19:19 -- target/dif.sh@72 -- # (( file = 1 ))
00:31:25.703 03:19:19 -- common/autotest_common.sh@1324 -- # grep libasan
00:31:25.703 03:19:19 -- target/dif.sh@72 -- # (( file <= files ))
00:31:25.703 03:19:19 -- target/dif.sh@73 -- # cat
00:31:25.703 03:19:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}'
00:31:25.703 03:19:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:31:25.703 03:19:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:31:25.703 {
00:31:25.703 "params": {
00:31:25.703 "name": "Nvme$subsystem",
00:31:25.703 "trtype": "$TEST_TRANSPORT",
00:31:25.703 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:25.703 "adrfam": "ipv4",
00:31:25.703 "trsvcid": "$NVMF_PORT",
00:31:25.703 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:25.703 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:25.703 "hdgst": ${hdgst:-false},
00:31:25.703 "ddgst": ${ddgst:-false}
00:31:25.703 },
00:31:25.703 "method": "bdev_nvme_attach_controller"
00:31:25.703 }
00:31:25.703 EOF
00:31:25.703 )")
00:31:25.703 03:19:19 -- nvmf/common.sh@542 -- # cat
00:31:25.703 03:19:19 -- target/dif.sh@72 -- # (( file++ ))
00:31:25.703 03:19:19 -- target/dif.sh@72 -- # (( file <= files ))
00:31:25.703 03:19:19 -- nvmf/common.sh@544 -- # jq . 00:31:25.703 03:19:19 -- nvmf/common.sh@545 -- # IFS=, 00:31:25.703 03:19:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:25.703 "params": { 00:31:25.703 "name": "Nvme0", 00:31:25.703 "trtype": "tcp", 00:31:25.703 "traddr": "10.0.0.2", 00:31:25.703 "adrfam": "ipv4", 00:31:25.703 "trsvcid": "4420", 00:31:25.703 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:25.703 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:25.703 "hdgst": false, 00:31:25.703 "ddgst": false 00:31:25.703 }, 00:31:25.703 "method": "bdev_nvme_attach_controller" 00:31:25.703 },{ 00:31:25.703 "params": { 00:31:25.703 "name": "Nvme1", 00:31:25.703 "trtype": "tcp", 00:31:25.703 "traddr": "10.0.0.2", 00:31:25.703 "adrfam": "ipv4", 00:31:25.703 "trsvcid": "4420", 00:31:25.703 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:25.703 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:25.703 "hdgst": false, 00:31:25.703 "ddgst": false 00:31:25.703 }, 00:31:25.703 "method": "bdev_nvme_attach_controller" 00:31:25.703 }' 00:31:25.703 03:19:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:25.703 03:19:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:25.703 03:19:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:25.703 03:19:19 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:25.703 03:19:19 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:25.703 03:19:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:25.703 03:19:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:25.703 03:19:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:25.703 03:19:19 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:25.703 03:19:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:31:25.703 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:25.703 ... 00:31:25.703 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:25.703 ... 00:31:25.703 fio-3.35 00:31:25.703 Starting 4 threads 00:31:25.703 EAL: No free 2048 kB hugepages reported on node 1 00:31:25.703 [2024-07-14 03:19:20.241299] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:31:25.703 [2024-07-14 03:19:20.241371] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:30.971 00:31:30.971 filename0: (groupid=0, jobs=1): err= 0: pid=2147503: Sun Jul 14 03:19:25 2024 00:31:30.971 read: IOPS=1864, BW=14.6MiB/s (15.3MB/s)(72.9MiB/5002msec) 00:31:30.971 slat (nsec): min=3844, max=33931, avg=11616.93, stdev=3628.19 00:31:30.971 clat (usec): min=1453, max=46492, avg=4255.00, stdev=1377.99 00:31:30.971 lat (usec): min=1461, max=46509, avg=4266.61, stdev=1377.88 00:31:30.971 clat percentiles (usec): 00:31:30.971 | 1.00th=[ 3130], 5.00th=[ 3523], 10.00th=[ 3654], 20.00th=[ 3818], 00:31:30.971 | 30.00th=[ 3949], 40.00th=[ 4047], 50.00th=[ 4146], 60.00th=[ 4228], 00:31:30.971 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 5080], 95.00th=[ 5669], 00:31:30.971 | 99.00th=[ 6456], 99.50th=[ 6587], 99.90th=[ 8455], 99.95th=[46400], 00:31:30.971 | 99.99th=[46400] 00:31:30.971 bw ( KiB/s): min=13920, max=15600, per=24.69%, avg=14848.00, stdev=600.53, samples=9 00:31:30.971 iops : min= 1740, max= 1950, avg=1856.00, stdev=75.07, samples=9 00:31:30.971 lat (msec) : 2=0.03%, 4=34.29%, 10=65.59%, 50=0.09% 00:31:30.971 cpu : usr=94.00%, sys=5.48%, ctx=14, majf=0, minf=0 00:31:30.971 IO depths : 1=0.6%, 2=1.9%, 4=68.8%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:30.971 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.971 issued rwts: total=9328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.971 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:30.971 filename0: (groupid=0, jobs=1): err= 0: pid=2147504: Sun Jul 14 03:19:25 2024 00:31:30.971 read: IOPS=1924, BW=15.0MiB/s (15.8MB/s)(75.2MiB/5002msec) 00:31:30.971 slat (nsec): min=4042, max=33301, avg=13496.49, stdev=3742.89 00:31:30.971 clat (usec): min=1733, max=8550, avg=4112.88, stdev=628.67 00:31:30.971 lat (usec): min=1745, max=8561, avg=4126.37, stdev=628.53 00:31:30.971 clat percentiles (usec): 00:31:30.971 | 1.00th=[ 2704], 5.00th=[ 3326], 10.00th=[ 3556], 20.00th=[ 3752], 00:31:30.971 | 30.00th=[ 3851], 40.00th=[ 3949], 50.00th=[ 4047], 60.00th=[ 4146], 00:31:30.971 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4817], 95.00th=[ 5604], 00:31:30.971 | 99.00th=[ 6194], 99.50th=[ 6259], 99.90th=[ 7046], 99.95th=[ 8356], 00:31:30.971 | 99.99th=[ 8586] 00:31:30.971 bw ( KiB/s): min=14544, max=16688, per=25.76%, avg=15491.56, stdev=592.47, samples=9 00:31:30.971 iops : min= 1818, max= 2086, avg=1936.44, stdev=74.06, samples=9 00:31:30.971 lat (msec) : 2=0.21%, 4=44.08%, 10=55.71% 00:31:30.972 cpu : usr=93.76%, sys=5.70%, ctx=7, majf=0, minf=0 00:31:30.972 IO depths : 1=0.1%, 2=7.2%, 4=65.2%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.972 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.972 issued rwts: total=9627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.972 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:30.972 filename1: (groupid=0, jobs=1): err= 0: pid=2147505: Sun Jul 14 03:19:25 2024 00:31:30.972 read: IOPS=1891, BW=14.8MiB/s (15.5MB/s)(73.9MiB/5003msec) 00:31:30.972 slat (nsec): min=3787, max=39064, avg=12514.65, stdev=4629.19 00:31:30.972 clat (usec): min=1474, max=9118, 
avg=4189.43, stdev=761.32 00:31:30.972 lat (usec): min=1483, max=9143, avg=4201.94, stdev=761.19 00:31:30.972 clat percentiles (usec): 00:31:30.972 | 1.00th=[ 2507], 5.00th=[ 3392], 10.00th=[ 3556], 20.00th=[ 3752], 00:31:30.972 | 30.00th=[ 3851], 40.00th=[ 3982], 50.00th=[ 4113], 60.00th=[ 4146], 00:31:30.972 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 5407], 95.00th=[ 6063], 00:31:30.972 | 99.00th=[ 6718], 99.50th=[ 6980], 99.90th=[ 7177], 99.95th=[ 8979], 00:31:30.972 | 99.99th=[ 9110] 00:31:30.972 bw ( KiB/s): min=14240, max=15600, per=25.28%, avg=15200.00, stdev=417.54, samples=9 00:31:30.972 iops : min= 1780, max= 1950, avg=1900.00, stdev=52.19, samples=9 00:31:30.972 lat (msec) : 2=0.24%, 4=40.45%, 10=59.30% 00:31:30.972 cpu : usr=90.56%, sys=7.12%, ctx=237, majf=0, minf=9 00:31:30.972 IO depths : 1=0.1%, 2=4.7%, 4=68.3%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.972 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.972 issued rwts: total=9463,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.972 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:30.972 filename1: (groupid=0, jobs=1): err= 0: pid=2147506: Sun Jul 14 03:19:25 2024 00:31:30.972 read: IOPS=1836, BW=14.3MiB/s (15.0MB/s)(71.8MiB/5002msec) 00:31:30.972 slat (nsec): min=3745, max=44052, avg=14946.95, stdev=4755.69 00:31:30.972 clat (usec): min=2361, max=48901, avg=4312.51, stdev=1396.42 00:31:30.972 lat (usec): min=2375, max=48913, avg=4327.46, stdev=1396.29 00:31:30.972 clat percentiles (usec): 00:31:30.972 | 1.00th=[ 3163], 5.00th=[ 3621], 10.00th=[ 3818], 20.00th=[ 3982], 00:31:30.972 | 30.00th=[ 4047], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4228], 00:31:30.972 | 70.00th=[ 4359], 80.00th=[ 4621], 90.00th=[ 4948], 95.00th=[ 5145], 00:31:30.972 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 6521], 99.95th=[49021], 00:31:30.972 | 99.99th=[49021] 00:31:30.972 bw ( 
KiB/s): min=13616, max=15216, per=24.27%, avg=14595.56, stdev=648.83, samples=9 00:31:30.972 iops : min= 1702, max= 1902, avg=1824.44, stdev=81.10, samples=9 00:31:30.972 lat (msec) : 4=25.14%, 10=74.77%, 50=0.09% 00:31:30.972 cpu : usr=88.68%, sys=8.48%, ctx=224, majf=0, minf=0 00:31:30.972 IO depths : 1=0.1%, 2=1.5%, 4=69.5%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.972 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.972 issued rwts: total=9185,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.972 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:30.972 00:31:30.972 Run status group 0 (all jobs): 00:31:30.972 READ: bw=58.7MiB/s (61.6MB/s), 14.3MiB/s-15.0MiB/s (15.0MB/s-15.8MB/s), io=294MiB (308MB), run=5002-5003msec 00:31:30.972 03:19:25 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:30.972 03:19:25 -- target/dif.sh@43 -- # local sub 00:31:30.972 03:19:25 -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.972 03:19:25 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:30.972 03:19:25 -- target/dif.sh@36 -- # local sub_id=0 00:31:30.972 03:19:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:30.972 03:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.972 03:19:25 -- common/autotest_common.sh@10 -- # set +x 00:31:30.972 03:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.972 03:19:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:30.972 03:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.972 03:19:25 -- common/autotest_common.sh@10 -- # set +x 00:31:30.972 03:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.972 03:19:25 -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.972 03:19:25 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:30.972 03:19:25 -- target/dif.sh@36 -- # local sub_id=1 
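The per-job result lines logged above (`read: IOPS=..., BW=...KiB/s ...`) follow fio's fixed summary format and can be extracted mechanically when triaging runs like this one. A minimal sketch; the helper name and regex are illustrative assumptions, not part of the autotest harness, and it only handles the integer-KiB/s form shown in this log:

```python
import re

def parse_fio_read_line(line):
    """Extract IOPS and KiB/s bandwidth from a fio 'read:' summary line."""
    m = re.search(r"IOPS=(\d+), BW=(\d+)KiB/s", line)
    if m is None:
        return None
    return {"iops": int(m.group(1)), "bw_kib": int(m.group(2))}

# A line taken verbatim from the log above:
print(parse_fio_read_line("read: IOPS=499, BW=1998KiB/s (2046kB/s)(19.5MiB/10006msec)"))
```

Lines that report bandwidth in MiB/s (e.g. the digest test's `BW=25.2MiB/s`) would need an extra unit branch.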
00:31:30.972 03:19:25 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:30.972 03:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.972 03:19:25 -- common/autotest_common.sh@10 -- # set +x 00:31:30.972 03:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.972 03:19:25 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:30.972 03:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.972 03:19:25 -- common/autotest_common.sh@10 -- # set +x 00:31:30.972 03:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.972 00:31:30.972 real 0m24.184s 00:31:30.972 user 4m28.106s 00:31:30.972 sys 0m8.116s 00:31:30.972 03:19:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:30.972 03:19:25 -- common/autotest_common.sh@10 -- # set +x 00:31:30.972 ************************************ 00:31:30.972 END TEST fio_dif_rand_params 00:31:30.972 ************************************ 00:31:30.972 03:19:25 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:30.972 03:19:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:30.972 03:19:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:30.972 03:19:25 -- common/autotest_common.sh@10 -- # set +x 00:31:30.972 ************************************ 00:31:30.972 START TEST fio_dif_digest 00:31:30.972 ************************************ 00:31:30.972 03:19:25 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:31:30.972 03:19:25 -- target/dif.sh@123 -- # local NULL_DIF 00:31:30.972 03:19:25 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:30.972 03:19:25 -- target/dif.sh@125 -- # local hdgst ddgst 00:31:30.972 03:19:25 -- target/dif.sh@127 -- # NULL_DIF=3 00:31:30.972 03:19:25 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:30.972 03:19:25 -- target/dif.sh@127 -- # numjobs=3 00:31:30.972 03:19:25 -- target/dif.sh@127 -- # iodepth=3 00:31:30.972 03:19:25 -- 
target/dif.sh@127 -- # runtime=10 00:31:30.972 03:19:25 -- target/dif.sh@128 -- # hdgst=true 00:31:30.972 03:19:25 -- target/dif.sh@128 -- # ddgst=true 00:31:30.972 03:19:25 -- target/dif.sh@130 -- # create_subsystems 0 00:31:30.972 03:19:25 -- target/dif.sh@28 -- # local sub 00:31:30.972 03:19:25 -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.972 03:19:25 -- target/dif.sh@31 -- # create_subsystem 0 00:31:30.972 03:19:25 -- target/dif.sh@18 -- # local sub_id=0 00:31:30.972 03:19:25 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:30.972 03:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.972 03:19:25 -- common/autotest_common.sh@10 -- # set +x 00:31:30.972 bdev_null0 00:31:30.972 03:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.972 03:19:25 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:30.972 03:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.972 03:19:25 -- common/autotest_common.sh@10 -- # set +x 00:31:30.972 03:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.972 03:19:25 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:30.972 03:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.972 03:19:25 -- common/autotest_common.sh@10 -- # set +x 00:31:30.972 03:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:30.972 03:19:25 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:30.972 03:19:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:30.972 03:19:25 -- common/autotest_common.sh@10 -- # set +x 00:31:30.972 [2024-07-14 03:19:25.702113] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.972 03:19:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:31:30.972 03:19:25 -- target/dif.sh@131 -- # fio /dev/fd/62
00:31:30.972 03:19:25 -- target/dif.sh@131 -- # create_json_sub_conf 0
00:31:30.972 03:19:25 -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:31:30.972 03:19:25 -- nvmf/common.sh@520 -- # config=()
00:31:30.972 03:19:25 -- nvmf/common.sh@520 -- # local subsystem config
00:31:30.972 03:19:25 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:31:30.972 03:19:25 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:30.972 03:19:25 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:31:30.972 {
00:31:30.972 "params": {
00:31:30.972 "name": "Nvme$subsystem",
00:31:30.972 "trtype": "$TEST_TRANSPORT",
00:31:30.972 "traddr": "$NVMF_FIRST_TARGET_IP",
00:31:30.972 "adrfam": "ipv4",
00:31:30.972 "trsvcid": "$NVMF_PORT",
00:31:30.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:31:30.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:31:30.972 "hdgst": ${hdgst:-false},
00:31:30.972 "ddgst": ${ddgst:-false}
00:31:30.972 },
00:31:30.972 "method": "bdev_nvme_attach_controller"
00:31:30.972 }
00:31:30.972 EOF
00:31:30.972 )")
00:31:30.972 03:19:25 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:30.972 03:19:25 -- target/dif.sh@82 -- # gen_fio_conf
00:31:30.972 03:19:25 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio
00:31:30.972 03:19:25 -- target/dif.sh@54 -- # local file
00:31:30.972 03:19:25 -- target/dif.sh@56 -- # cat
00:31:30.972 03:19:25 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:31:30.972 03:19:25 -- common/autotest_common.sh@1318 -- # local sanitizers
00:31:30.972 03:19:25 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:30.972 03:19:25 -- common/autotest_common.sh@1320 -- # shift
00:31:30.972 03:19:25 -- common/autotest_common.sh@1322 -- # local asan_lib=
00:31:30.972 03:19:25 -- nvmf/common.sh@542 -- # cat
00:31:30.972 03:19:25 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}"
00:31:30.972 03:19:25 -- target/dif.sh@72 -- # (( file = 1 ))
00:31:30.972 03:19:25 -- target/dif.sh@72 -- # (( file <= files ))
00:31:30.972 03:19:25 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:30.972 03:19:25 -- common/autotest_common.sh@1324 -- # grep libasan
00:31:30.972 03:19:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}'
00:31:30.972 03:19:25 -- nvmf/common.sh@544 -- # jq .
00:31:30.972 03:19:25 -- nvmf/common.sh@545 -- # IFS=,
00:31:30.972 03:19:25 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:31:30.972 "params": {
00:31:30.972 "name": "Nvme0",
00:31:30.972 "trtype": "tcp",
00:31:30.972 "traddr": "10.0.0.2",
00:31:30.972 "adrfam": "ipv4",
00:31:30.972 "trsvcid": "4420",
00:31:30.972 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:30.972 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:30.973 "hdgst": true,
00:31:30.973 "ddgst": true
00:31:30.973 },
00:31:30.973 "method": "bdev_nvme_attach_controller"
00:31:30.973 }'
00:31:30.973 03:19:25 -- common/autotest_common.sh@1324 -- # asan_lib=
00:31:30.973 03:19:25 -- common/autotest_common.sh@1325 -- # [[ -n '' ]]
00:31:30.973 03:19:25 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}"
00:31:30.973 03:19:25 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:31:30.973 03:19:25 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan
00:31:30.973 03:19:25 -- common/autotest_common.sh@1324 -- # awk '{print $3}'
00:31:30.973 03:19:25 -- common/autotest_common.sh@1324 -- # asan_lib=
00:31:30.973 03:19:25 -- common/autotest_common.sh@1325 -- # [[ -n '' ]]
00:31:30.973 03:19:25 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:31:30.973 03:19:25 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:31:30.973 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:31:30.973 ...
00:31:30.973 fio-3.35
00:31:30.973 Starting 3 threads
00:31:30.973 EAL: No free 2048 kB hugepages reported on node 1
00:31:31.229 [2024-07-14 03:19:26.373494] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:31:31.229 [2024-07-14 03:19:26.373558] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:31:43.429
00:31:43.429 filename0: (groupid=0, jobs=1): err= 0: pid=2148394: Sun Jul 14 03:19:36 2024
00:31:43.429 read: IOPS=201, BW=25.2MiB/s (26.5MB/s)(254MiB/10048msec)
00:31:43.429 slat (nsec): min=4406, max=45688, avg=16387.46, stdev=4621.86
00:31:43.429 clat (usec): min=8419, max=59785, avg=14793.81, stdev=8737.61
00:31:43.429 lat (usec): min=8431, max=59802, avg=14810.20, stdev=8737.83
00:31:43.429 clat percentiles (usec):
00:31:43.429 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10814],
00:31:43.429 | 30.00th=[11994], 40.00th=[12780], 50.00th=[13435], 60.00th=[13829],
00:31:43.429 | 70.00th=[14354], 80.00th=[15008], 90.00th=[15926], 95.00th=[17695],
00:31:43.429 | 99.00th=[55837], 99.50th=[56886], 99.90th=[57934], 99.95th=[58459],
00:31:43.429 | 99.99th=[60031]
00:31:43.429 bw ( KiB/s): min=21760, max=30781, per=34.81%, avg=25935.85, stdev=2618.06, samples=20
00:31:43.429 iops : min= 170, max= 240, avg=202.60, stdev=20.41, samples=20
00:31:43.429 lat (msec) : 10=8.43%, 20=87.09%, 50=0.25%, 100=4.24%
00:31:43.429 cpu : usr=94.08%, sys=5.42%, ctx=17, majf=0, minf=158
00:31:43.429 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:43.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:43.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:43.429 issued rwts: total=2029,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:43.429 latency : target=0, window=0, percentile=100.00%, depth=3
00:31:43.429 filename0: (groupid=0, jobs=1): err= 0: pid=2148395: Sun Jul 14 03:19:36 2024
00:31:43.429 read: IOPS=141, BW=17.7MiB/s (18.6MB/s)(178MiB/10050msec)
00:31:43.429 slat (nsec): min=3742, max=66080, avg=17325.68, stdev=4980.13
00:31:43.429 clat (usec): min=6702, max=99045, avg=21119.06, stdev=14456.30
00:31:43.429 lat (usec): min=6715, max=99062, avg=21136.39, stdev=14456.51
00:31:43.429 clat percentiles (usec):
00:31:43.429 | 1.00th=[ 7570], 5.00th=[ 9372], 10.00th=[10552], 20.00th=[13829],
00:31:43.429 | 30.00th=[15664], 40.00th=[16450], 50.00th=[17171], 60.00th=[17957],
00:31:43.429 | 70.00th=[18482], 80.00th=[19530], 90.00th=[55837], 95.00th=[57934],
00:31:43.429 | 99.00th=[60556], 99.50th=[61604], 99.90th=[99091], 99.95th=[99091],
00:31:43.429 | 99.99th=[99091]
00:31:43.429 bw ( KiB/s): min=14336, max=22016, per=24.41%, avg=18188.80, stdev=2163.44, samples=20
00:31:43.429 iops : min= 112, max= 172, avg=142.10, stdev=16.90, samples=20
00:31:43.429 lat (msec) : 10=7.37%, 20=75.42%, 50=4.99%, 100=12.22%
00:31:43.429 cpu : usr=93.84%, sys=5.21%, ctx=515, majf=0, minf=161
00:31:43.429 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:43.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:43.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:43.429 issued rwts: total=1424,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:43.429 latency : target=0, window=0, percentile=100.00%, depth=3
00:31:43.429 filename0: (groupid=0, jobs=1): err= 0: pid=2148396: Sun Jul 14 03:19:36 2024
00:31:43.429 read: IOPS=238, BW=29.8MiB/s (31.3MB/s)(300MiB/10047msec)
00:31:43.429 slat (usec): min=4, max=1057, avg=20.10, stdev=38.39
00:31:43.429 clat (usec): min=6091, max=94575, avg=12529.83, stdev=4056.36
00:31:43.429 lat (usec): min=6105, max=94593, avg=12549.94, stdev=4056.59
00:31:43.429 clat percentiles (usec):
00:31:43.429 | 1.00th=[ 6980], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10421],
00:31:43.429 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12518], 60.00th=[13173],
00:31:43.429 | 70.00th=[13566], 80.00th=[14091], 90.00th=[14746], 95.00th=[15270],
00:31:43.429 | 99.00th=[16319], 99.50th=[49546], 99.90th=[56361], 99.95th=[93848],
00:31:43.429 | 99.99th=[94897]
00:31:43.429 bw ( KiB/s): min=24064, max=34816, per=41.14%, avg=30656.00, stdev=2274.43, samples=20
00:31:43.429 iops : min= 188, max= 272, avg=239.50, stdev=17.77, samples=20
00:31:43.429 lat (msec) : 10=13.10%, 20=86.27%, 50=0.21%, 100=0.42%
00:31:43.429 cpu : usr=78.40%, sys=10.72%, ctx=138, majf=0, minf=142
00:31:43.429 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:43.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:43.429 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:43.429 issued rwts: total=2397,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:43.429 latency : target=0, window=0, percentile=100.00%, depth=3
00:31:43.429
00:31:43.429 Run status group 0 (all jobs):
00:31:43.429 READ: bw=72.8MiB/s (76.3MB/s), 17.7MiB/s-29.8MiB/s (18.6MB/s-31.3MB/s), io=731MiB (767MB), run=10047-10050msec
00:31:43.429 03:19:36 -- target/dif.sh@132 -- # destroy_subsystems 0
00:31:43.429 03:19:36 -- target/dif.sh@43 -- # local sub
00:31:43.429 03:19:36 -- target/dif.sh@45 -- # for sub in "$@"
00:31:43.429 03:19:36 -- target/dif.sh@46 -- # destroy_subsystem 0
00:31:43.429 03:19:36 -- target/dif.sh@36 -- # local sub_id=0
00:31:43.429 03:19:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:31:43.429 03:19:36 -- common/autotest_common.sh@551 -- # xtrace_disable
00:31:43.429 03:19:36 --
common/autotest_common.sh@10 -- # set +x 00:31:43.429 03:19:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:43.429 03:19:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:43.429 03:19:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:43.429 03:19:36 -- common/autotest_common.sh@10 -- # set +x 00:31:43.429 03:19:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:43.429 00:31:43.429 real 0m11.111s 00:31:43.429 user 0m27.775s 00:31:43.429 sys 0m2.452s 00:31:43.429 03:19:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:43.429 03:19:36 -- common/autotest_common.sh@10 -- # set +x 00:31:43.429 ************************************ 00:31:43.429 END TEST fio_dif_digest 00:31:43.429 ************************************ 00:31:43.429 03:19:36 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:43.429 03:19:36 -- target/dif.sh@147 -- # nvmftestfini 00:31:43.429 03:19:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:43.429 03:19:36 -- nvmf/common.sh@116 -- # sync 00:31:43.429 03:19:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:43.429 03:19:36 -- nvmf/common.sh@119 -- # set +e 00:31:43.429 03:19:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:43.429 03:19:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:43.429 rmmod nvme_tcp 00:31:43.429 rmmod nvme_fabrics 00:31:43.429 rmmod nvme_keyring 00:31:43.429 03:19:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:43.429 03:19:36 -- nvmf/common.sh@123 -- # set -e 00:31:43.429 03:19:36 -- nvmf/common.sh@124 -- # return 0 00:31:43.429 03:19:36 -- nvmf/common.sh@477 -- # '[' -n 2142167 ']' 00:31:43.429 03:19:36 -- nvmf/common.sh@478 -- # killprocess 2142167 00:31:43.429 03:19:36 -- common/autotest_common.sh@926 -- # '[' -z 2142167 ']' 00:31:43.429 03:19:36 -- common/autotest_common.sh@930 -- # kill -0 2142167 00:31:43.429 03:19:36 -- common/autotest_common.sh@931 -- # uname 00:31:43.429 03:19:36 -- common/autotest_common.sh@931 -- # 
'[' Linux = Linux ']' 00:31:43.430 03:19:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2142167 00:31:43.430 03:19:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:43.430 03:19:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:43.430 03:19:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2142167' 00:31:43.430 killing process with pid 2142167 00:31:43.430 03:19:36 -- common/autotest_common.sh@945 -- # kill 2142167 00:31:43.430 03:19:36 -- common/autotest_common.sh@950 -- # wait 2142167 00:31:43.430 03:19:37 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:31:43.430 03:19:37 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:43.430 Waiting for block devices as requested 00:31:43.430 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:43.430 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:43.430 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:43.430 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:43.430 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:43.690 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:43.690 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:43.690 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:43.690 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:43.690 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:43.980 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:43.980 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:43.980 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:43.980 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:44.238 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:44.238 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:44.238 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:44.238 03:19:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:44.238 03:19:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:44.238 03:19:39 -- nvmf/common.sh@273 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:44.239 03:19:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:44.239 03:19:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.239 03:19:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:44.239 03:19:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.775 03:19:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:46.775 00:31:46.775 real 1m7.072s 00:31:46.775 user 6m23.423s 00:31:46.775 sys 0m20.103s 00:31:46.775 03:19:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:46.775 03:19:41 -- common/autotest_common.sh@10 -- # set +x 00:31:46.775 ************************************ 00:31:46.775 END TEST nvmf_dif 00:31:46.775 ************************************ 00:31:46.775 03:19:41 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:46.775 03:19:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:46.775 03:19:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:46.775 03:19:41 -- common/autotest_common.sh@10 -- # set +x 00:31:46.775 ************************************ 00:31:46.775 START TEST nvmf_abort_qd_sizes 00:31:46.775 ************************************ 00:31:46.775 03:19:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:46.775 * Looking for test storage... 
00:31:46.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:46.775 03:19:41 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:46.775 03:19:41 -- nvmf/common.sh@7 -- # uname -s 00:31:46.775 03:19:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:46.775 03:19:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:46.775 03:19:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:46.775 03:19:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:46.775 03:19:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:46.775 03:19:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:46.775 03:19:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:46.775 03:19:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:46.775 03:19:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:46.775 03:19:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:46.775 03:19:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:46.775 03:19:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:46.775 03:19:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:46.775 03:19:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:46.775 03:19:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:46.775 03:19:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:46.775 03:19:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:46.775 03:19:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:46.775 03:19:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:46.775 03:19:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.775 03:19:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.775 03:19:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.775 03:19:41 -- paths/export.sh@5 -- # export PATH 00:31:46.775 03:19:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:46.775 03:19:41 -- nvmf/common.sh@46 -- # : 0 00:31:46.775 03:19:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:46.775 03:19:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:46.775 
03:19:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:46.775 03:19:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:46.775 03:19:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:46.775 03:19:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:46.775 03:19:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:46.775 03:19:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:46.775 03:19:41 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:31:46.775 03:19:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:46.775 03:19:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:46.775 03:19:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:46.775 03:19:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:46.775 03:19:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:46.775 03:19:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:46.775 03:19:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:46.775 03:19:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:46.775 03:19:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:46.775 03:19:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:46.775 03:19:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:46.775 03:19:41 -- common/autotest_common.sh@10 -- # set +x 00:31:48.191 03:19:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:48.191 03:19:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:48.191 03:19:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:48.191 03:19:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:48.191 03:19:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:48.191 03:19:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:48.191 03:19:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:48.191 03:19:43 -- nvmf/common.sh@294 -- # net_devs=() 00:31:48.191 03:19:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:48.191 
03:19:43 -- nvmf/common.sh@295 -- # e810=() 00:31:48.191 03:19:43 -- nvmf/common.sh@295 -- # local -ga e810 00:31:48.191 03:19:43 -- nvmf/common.sh@296 -- # x722=() 00:31:48.191 03:19:43 -- nvmf/common.sh@296 -- # local -ga x722 00:31:48.191 03:19:43 -- nvmf/common.sh@297 -- # mlx=() 00:31:48.191 03:19:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:48.191 03:19:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:48.191 03:19:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.191 03:19:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.191 03:19:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.191 03:19:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.191 03:19:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.191 03:19:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.191 03:19:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.191 03:19:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.191 03:19:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.191 03:19:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.191 03:19:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:48.191 03:19:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:48.191 03:19:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:48.191 03:19:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:48.191 03:19:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:48.191 03:19:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:48.191 03:19:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:48.191 03:19:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:48.191 Found 0000:0a:00.0 (0x8086 - 0x159b) 
00:31:48.191 03:19:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:48.191 03:19:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:48.191 03:19:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.191 03:19:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.191 03:19:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:48.191 03:19:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:48.191 03:19:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:48.191 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:48.191 03:19:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:48.191 03:19:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:48.191 03:19:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.191 03:19:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.191 03:19:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:48.191 03:19:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:48.191 03:19:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:48.191 03:19:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:48.191 03:19:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:48.191 03:19:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.191 03:19:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:48.191 03:19:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.191 03:19:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:48.191 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:48.191 03:19:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.191 03:19:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:48.191 03:19:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.191 03:19:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:48.191 03:19:43 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.191 03:19:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:48.191 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:48.191 03:19:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.191 03:19:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:48.191 03:19:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:48.191 03:19:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:48.191 03:19:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:48.191 03:19:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:48.191 03:19:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:48.191 03:19:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:48.191 03:19:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:48.191 03:19:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:48.191 03:19:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:48.191 03:19:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:48.191 03:19:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:48.191 03:19:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:48.191 03:19:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:48.191 03:19:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:48.191 03:19:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:48.191 03:19:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:48.191 03:19:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:48.191 03:19:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:48.191 03:19:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:48.191 03:19:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:48.191 03:19:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk 
ip link set cvl_0_0 up 00:31:48.191 03:19:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:48.191 03:19:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:48.191 03:19:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:48.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:48.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:31:48.191 00:31:48.191 --- 10.0.0.2 ping statistics --- 00:31:48.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.191 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:31:48.191 03:19:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:48.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:48.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:31:48.191 00:31:48.191 --- 10.0.0.1 ping statistics --- 00:31:48.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.191 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:31:48.191 03:19:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:48.191 03:19:43 -- nvmf/common.sh@410 -- # return 0 00:31:48.191 03:19:43 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:31:48.191 03:19:43 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:49.565 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:49.565 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:49.565 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:49.565 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:49.565 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:49.565 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:49.565 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:49.565 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:49.565 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:49.565 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:49.565 
0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:49.565 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:49.565 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:49.565 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:49.565 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:49.565 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:50.499 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:31:50.758 03:19:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:50.758 03:19:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:50.758 03:19:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:50.758 03:19:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:50.758 03:19:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:50.758 03:19:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:50.758 03:19:45 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:31:50.758 03:19:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:50.758 03:19:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:50.758 03:19:45 -- common/autotest_common.sh@10 -- # set +x 00:31:50.758 03:19:45 -- nvmf/common.sh@469 -- # nvmfpid=2153287 00:31:50.758 03:19:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:50.758 03:19:45 -- nvmf/common.sh@470 -- # waitforlisten 2153287 00:31:50.758 03:19:45 -- common/autotest_common.sh@819 -- # '[' -z 2153287 ']' 00:31:50.758 03:19:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:50.758 03:19:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:50.758 03:19:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:50.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:50.758 03:19:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:50.758 03:19:45 -- common/autotest_common.sh@10 -- # set +x 00:31:50.758 [2024-07-14 03:19:45.821502] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:31:50.758 [2024-07-14 03:19:45.821571] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:50.758 EAL: No free 2048 kB hugepages reported on node 1 00:31:50.758 [2024-07-14 03:19:45.887775] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:50.758 [2024-07-14 03:19:45.977811] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:50.758 [2024-07-14 03:19:45.977995] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:50.758 [2024-07-14 03:19:45.978015] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:50.758 [2024-07-14 03:19:45.978030] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:50.758 [2024-07-14 03:19:45.978090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:50.758 [2024-07-14 03:19:45.978144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:50.758 [2024-07-14 03:19:45.978262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:50.758 [2024-07-14 03:19:45.978264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:51.690 03:19:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:51.690 03:19:46 -- common/autotest_common.sh@852 -- # return 0 00:31:51.690 03:19:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:51.690 03:19:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:51.690 03:19:46 -- common/autotest_common.sh@10 -- # set +x 00:31:51.690 03:19:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:51.690 03:19:46 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:51.690 03:19:46 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:31:51.690 03:19:46 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:31:51.690 03:19:46 -- scripts/common.sh@311 -- # local bdf bdfs 00:31:51.690 03:19:46 -- scripts/common.sh@312 -- # local nvmes 00:31:51.690 03:19:46 -- scripts/common.sh@314 -- # [[ -n 0000:88:00.0 ]] 00:31:51.690 03:19:46 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:51.690 03:19:46 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:31:51.690 03:19:46 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:31:51.690 03:19:46 -- scripts/common.sh@322 -- # uname -s 00:31:51.690 03:19:46 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:31:51.690 03:19:46 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:31:51.690 03:19:46 -- scripts/common.sh@327 -- # (( 1 )) 00:31:51.690 03:19:46 -- 
scripts/common.sh@328 -- # printf '%s\n' 0000:88:00.0 00:31:51.690 03:19:46 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:31:51.690 03:19:46 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:88:00.0 00:31:51.690 03:19:46 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:31:51.690 03:19:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:51.690 03:19:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:51.690 03:19:46 -- common/autotest_common.sh@10 -- # set +x 00:31:51.690 ************************************ 00:31:51.690 START TEST spdk_target_abort 00:31:51.690 ************************************ 00:31:51.690 03:19:46 -- common/autotest_common.sh@1104 -- # spdk_target 00:31:51.690 03:19:46 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:51.690 03:19:46 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:31:51.690 03:19:46 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:31:51.690 03:19:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:51.690 03:19:46 -- common/autotest_common.sh@10 -- # set +x 00:31:54.969 spdk_targetn1 00:31:54.969 03:19:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:54.969 03:19:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.969 03:19:49 -- common/autotest_common.sh@10 -- # set +x 00:31:54.969 [2024-07-14 03:19:49.603902] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:54.969 03:19:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:31:54.969 03:19:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.969 03:19:49 -- common/autotest_common.sh@10 -- # 
set +x 00:31:54.969 03:19:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:31:54.969 03:19:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.969 03:19:49 -- common/autotest_common.sh@10 -- # set +x 00:31:54.969 03:19:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:31:54.969 03:19:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:54.969 03:19:49 -- common/autotest_common.sh@10 -- # set +x 00:31:54.969 [2024-07-14 03:19:49.636202] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:54.969 03:19:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:54.969 03:19:49 -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:54.969 03:19:49 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:31:54.969 EAL: No free 2048 kB hugepages reported on node 1 00:31:57.529 Initializing NVMe Controllers 00:31:57.529 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:31:57.529 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:31:57.529 Initialization complete. Launching workers. 
00:31:57.529 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10287, failed: 0 00:31:57.529 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1241, failed to submit 9046 00:31:57.529 success 845, unsuccess 396, failed 0 00:31:57.529 03:19:52 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:57.529 03:19:52 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:31:57.787 EAL: No free 2048 kB hugepages reported on node 1 00:32:01.062 [2024-07-14 03:19:55.982897] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e09660 is same with the state(5) to be set 00:32:01.062 Initializing NVMe Controllers 00:32:01.062 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:01.062 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:01.062 Initialization complete. Launching workers. 
00:32:01.062 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8584, failed: 0 00:32:01.062 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1204, failed to submit 7380 00:32:01.062 success 355, unsuccess 849, failed 0 00:32:01.062 03:19:56 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:01.062 03:19:56 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:01.062 EAL: No free 2048 kB hugepages reported on node 1 00:32:04.342 Initializing NVMe Controllers 00:32:04.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:04.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:04.342 Initialization complete. Launching workers. 00:32:04.342 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 31935, failed: 0 00:32:04.342 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2735, failed to submit 29200 00:32:04.342 success 533, unsuccess 2202, failed 0 00:32:04.342 03:19:59 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:32:04.342 03:19:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:04.342 03:19:59 -- common/autotest_common.sh@10 -- # set +x 00:32:04.342 03:19:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:04.342 03:19:59 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:04.342 03:19:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:04.342 03:19:59 -- common/autotest_common.sh@10 -- # set +x 00:32:05.733 03:20:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:05.733 03:20:00 -- target/abort_qd_sizes.sh@62 -- # 
killprocess 2153287 00:32:05.733 03:20:00 -- common/autotest_common.sh@926 -- # '[' -z 2153287 ']' 00:32:05.733 03:20:00 -- common/autotest_common.sh@930 -- # kill -0 2153287 00:32:05.733 03:20:00 -- common/autotest_common.sh@931 -- # uname 00:32:05.733 03:20:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:05.733 03:20:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2153287 00:32:05.733 03:20:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:05.733 03:20:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:05.733 03:20:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2153287' 00:32:05.733 killing process with pid 2153287 00:32:05.733 03:20:00 -- common/autotest_common.sh@945 -- # kill 2153287 00:32:05.733 03:20:00 -- common/autotest_common.sh@950 -- # wait 2153287 00:32:05.733 00:32:05.733 real 0m14.097s 00:32:05.733 user 0m55.647s 00:32:05.733 sys 0m2.684s 00:32:05.733 03:20:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:05.733 03:20:00 -- common/autotest_common.sh@10 -- # set +x 00:32:05.733 ************************************ 00:32:05.733 END TEST spdk_target_abort 00:32:05.733 ************************************ 00:32:05.733 03:20:00 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:32:05.733 03:20:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:05.733 03:20:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:05.733 03:20:00 -- common/autotest_common.sh@10 -- # set +x 00:32:05.733 ************************************ 00:32:05.733 START TEST kernel_target_abort 00:32:05.733 ************************************ 00:32:05.733 03:20:00 -- common/autotest_common.sh@1104 -- # kernel_target 00:32:05.733 03:20:00 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:32:05.733 03:20:00 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:32:05.733 03:20:00 -- 
nvmf/common.sh@621 -- # kernel_name=kernel_target 00:32:05.733 03:20:00 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:32:05.733 03:20:00 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:32:05.733 03:20:00 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:05.734 03:20:00 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:05.734 03:20:00 -- nvmf/common.sh@627 -- # local block nvme 00:32:05.734 03:20:00 -- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:32:05.734 03:20:00 -- nvmf/common.sh@630 -- # modprobe nvmet 00:32:05.734 03:20:00 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:05.734 03:20:00 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:07.126 Waiting for block devices as requested 00:32:07.126 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:07.126 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:07.126 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:07.384 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:07.384 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:07.384 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:07.384 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:07.384 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:07.642 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:07.642 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:07.642 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:07.642 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:07.900 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:07.900 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:07.900 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:08.157 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:08.157 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:08.157 03:20:03 -- nvmf/common.sh@638 -- # for block in 
/sys/block/nvme* 00:32:08.157 03:20:03 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:08.157 03:20:03 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:32:08.157 03:20:03 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:32:08.157 03:20:03 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:08.157 No valid GPT data, bailing 00:32:08.157 03:20:03 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:08.157 03:20:03 -- scripts/common.sh@393 -- # pt= 00:32:08.157 03:20:03 -- scripts/common.sh@394 -- # return 1 00:32:08.157 03:20:03 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:32:08.157 03:20:03 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:32:08.157 03:20:03 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:08.157 03:20:03 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:08.415 03:20:03 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:08.415 03:20:03 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:32:08.415 03:20:03 -- nvmf/common.sh@654 -- # echo 1 00:32:08.415 03:20:03 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:32:08.415 03:20:03 -- nvmf/common.sh@656 -- # echo 1 00:32:08.415 03:20:03 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:32:08.415 03:20:03 -- nvmf/common.sh@663 -- # echo tcp 00:32:08.415 03:20:03 -- nvmf/common.sh@664 -- # echo 4420 00:32:08.415 03:20:03 -- nvmf/common.sh@665 -- # echo ipv4 00:32:08.415 03:20:03 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:08.415 03:20:03 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:08.415 00:32:08.415 Discovery Log Number of Records 2, Generation 
counter 2 00:32:08.415 =====Discovery Log Entry 0====== 00:32:08.415 trtype: tcp 00:32:08.415 adrfam: ipv4 00:32:08.415 subtype: current discovery subsystem 00:32:08.415 treq: not specified, sq flow control disable supported 00:32:08.415 portid: 1 00:32:08.415 trsvcid: 4420 00:32:08.415 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:08.415 traddr: 10.0.0.1 00:32:08.415 eflags: none 00:32:08.415 sectype: none 00:32:08.415 =====Discovery Log Entry 1====== 00:32:08.415 trtype: tcp 00:32:08.415 adrfam: ipv4 00:32:08.415 subtype: nvme subsystem 00:32:08.415 treq: not specified, sq flow control disable supported 00:32:08.415 portid: 1 00:32:08.415 trsvcid: 4420 00:32:08.415 subnqn: kernel_target 00:32:08.415 traddr: 10.0.0.1 00:32:08.415 eflags: none 00:32:08.415 sectype: none 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:08.415 03:20:03 -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:08.415 03:20:03 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:08.415 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.691 Initializing NVMe Controllers 00:32:11.691 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:11.691 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:11.691 Initialization complete. Launching workers. 
00:32:11.691 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 27966, failed: 0 00:32:11.691 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 27966, failed to submit 0 00:32:11.691 success 0, unsuccess 27966, failed 0 00:32:11.691 03:20:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:11.691 03:20:06 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:11.691 EAL: No free 2048 kB hugepages reported on node 1 00:32:14.970 Initializing NVMe Controllers 00:32:14.970 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:14.970 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:14.970 Initialization complete. Launching workers. 00:32:14.970 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 57282, failed: 0 00:32:14.970 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 14418, failed to submit 42864 00:32:14.970 success 0, unsuccess 14418, failed 0 00:32:14.970 03:20:09 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:14.970 03:20:09 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:14.970 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.260 Initializing NVMe Controllers 00:32:18.260 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:18.260 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:18.260 Initialization complete. Launching workers. 
00:32:18.260 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 56174, failed: 0 00:32:18.260 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 14014, failed to submit 42160 00:32:18.260 success 0, unsuccess 14014, failed 0 00:32:18.260 03:20:12 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:32:18.260 03:20:12 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:32:18.260 03:20:12 -- nvmf/common.sh@677 -- # echo 0 00:32:18.260 03:20:12 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:32:18.260 03:20:12 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:18.260 03:20:12 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:18.260 03:20:12 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:18.260 03:20:12 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:32:18.260 03:20:12 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:32:18.260 00:32:18.260 real 0m11.995s 00:32:18.260 user 0m4.003s 00:32:18.260 sys 0m2.575s 00:32:18.260 03:20:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:18.260 03:20:12 -- common/autotest_common.sh@10 -- # set +x 00:32:18.260 ************************************ 00:32:18.260 END TEST kernel_target_abort 00:32:18.260 ************************************ 00:32:18.260 03:20:12 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:32:18.260 03:20:12 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:32:18.260 03:20:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:18.260 03:20:12 -- nvmf/common.sh@116 -- # sync 00:32:18.260 03:20:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:18.260 03:20:12 -- nvmf/common.sh@119 -- # set +e 00:32:18.260 03:20:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:18.260 03:20:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 
00:32:18.260 rmmod nvme_tcp 00:32:18.260 rmmod nvme_fabrics 00:32:18.260 rmmod nvme_keyring 00:32:18.260 03:20:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:18.260 03:20:12 -- nvmf/common.sh@123 -- # set -e 00:32:18.260 03:20:12 -- nvmf/common.sh@124 -- # return 0 00:32:18.260 03:20:12 -- nvmf/common.sh@477 -- # '[' -n 2153287 ']' 00:32:18.260 03:20:12 -- nvmf/common.sh@478 -- # killprocess 2153287 00:32:18.260 03:20:12 -- common/autotest_common.sh@926 -- # '[' -z 2153287 ']' 00:32:18.260 03:20:12 -- common/autotest_common.sh@930 -- # kill -0 2153287 00:32:18.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2153287) - No such process 00:32:18.260 03:20:12 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2153287 is not found' 00:32:18.260 Process with pid 2153287 is not found 00:32:18.260 03:20:12 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:32:18.260 03:20:12 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:18.826 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:32:18.826 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:32:18.826 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:18.826 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:18.826 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:19.083 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:19.083 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:32:19.083 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:19.083 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:19.083 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:19.083 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:19.083 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:19.083 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 
00:32:19.083 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:19.083 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:19.083 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:19.083 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:19.083 03:20:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:19.083 03:20:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:19.083 03:20:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:19.084 03:20:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:19.084 03:20:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:19.084 03:20:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:19.084 03:20:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:21.617 03:20:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:21.617 00:32:21.617 real 0m34.804s 00:32:21.617 user 1m1.862s 00:32:21.617 sys 0m8.372s 00:32:21.617 03:20:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:21.617 03:20:16 -- common/autotest_common.sh@10 -- # set +x 00:32:21.617 ************************************ 00:32:21.617 END TEST nvmf_abort_qd_sizes 00:32:21.617 ************************************ 00:32:21.617 03:20:16 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:21.617 03:20:16 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:21.617 03:20:16 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:21.617 03:20:16 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:32:21.617 03:20:16 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:21.617 03:20:16 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:21.617 03:20:16 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:21.617 03:20:16 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:21.617 03:20:16 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:21.617 03:20:16 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:21.617 03:20:16 -- 
spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:21.617 03:20:16 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:21.617 03:20:16 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:21.617 03:20:16 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:21.617 03:20:16 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:32:21.617 03:20:16 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:32:21.617 03:20:16 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:32:21.617 03:20:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:21.617 03:20:16 -- common/autotest_common.sh@10 -- # set +x 00:32:21.617 03:20:16 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:32:21.617 03:20:16 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:32:21.617 03:20:16 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:32:21.617 03:20:16 -- common/autotest_common.sh@10 -- # set +x 00:32:22.994 INFO: APP EXITING 00:32:22.994 INFO: killing all VMs 00:32:22.994 INFO: killing vhost app 00:32:22.994 INFO: EXIT DONE 00:32:23.930 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:32:24.187 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:32:24.187 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:24.187 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:24.187 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:24.187 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:24.187 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:32:24.187 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:24.187 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:24.187 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:24.187 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:24.187 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:24.187 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:24.187 0000:80:04.3 (8086 0e23): 
Already using the ioatdma driver 00:32:24.187 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:24.187 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:24.187 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:25.564 Cleaning 00:32:25.564 Removing: /var/run/dpdk/spdk0/config 00:32:25.564 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:25.564 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:25.564 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:25.564 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:25.564 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:25.564 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:25.564 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:25.564 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:25.564 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:25.564 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:25.564 Removing: /var/run/dpdk/spdk1/config 00:32:25.564 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:25.564 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:25.564 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:25.564 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:25.564 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:25.564 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:25.564 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:25.564 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:25.564 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:25.564 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:25.564 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:25.564 Removing: /var/run/dpdk/spdk2/config 00:32:25.564 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:25.564 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:25.564 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 
00:32:25.564 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:25.564 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:25.564 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:25.564 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:25.564 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:25.564 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:25.564 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:25.564 Removing: /var/run/dpdk/spdk3/config 00:32:25.564 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:25.564 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:25.564 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:25.564 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:25.564 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:25.564 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:25.564 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:25.564 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:25.564 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:25.564 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:25.564 Removing: /var/run/dpdk/spdk4/config 00:32:25.564 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:25.564 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:25.564 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:25.564 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:25.564 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:25.564 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:25.564 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:25.564 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:25.564 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:25.564 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:25.564 Removing: /dev/shm/bdev_svc_trace.1 00:32:25.564 Removing: /dev/shm/nvmf_trace.0 00:32:25.564 Removing: 
/dev/shm/spdk_tgt_trace.pid1878201 00:32:25.564 Removing: /var/run/dpdk/spdk0 00:32:25.564 Removing: /var/run/dpdk/spdk1 00:32:25.564 Removing: /var/run/dpdk/spdk2 00:32:25.564 Removing: /var/run/dpdk/spdk3 00:32:25.564 Removing: /var/run/dpdk/spdk4 00:32:25.564 Removing: /var/run/dpdk/spdk_pid1876508 00:32:25.564 Removing: /var/run/dpdk/spdk_pid1877262 00:32:25.564 Removing: /var/run/dpdk/spdk_pid1878201 00:32:25.564 Removing: /var/run/dpdk/spdk_pid1878683 00:32:25.564 Removing: /var/run/dpdk/spdk_pid1879909 00:32:25.565 Removing: /var/run/dpdk/spdk_pid1880864 00:32:25.565 Removing: /var/run/dpdk/spdk_pid1881050 00:32:25.565 Removing: /var/run/dpdk/spdk_pid1881373 00:32:25.565 Removing: /var/run/dpdk/spdk_pid1881584 00:32:25.565 Removing: /var/run/dpdk/spdk_pid1881903 00:32:25.565 Removing: /var/run/dpdk/spdk_pid1882062 00:32:25.565 Removing: /var/run/dpdk/spdk_pid1882222 00:32:25.565 Removing: /var/run/dpdk/spdk_pid1882403 00:32:25.565 Removing: /var/run/dpdk/spdk_pid1882991 00:32:25.565 Removing: /var/run/dpdk/spdk_pid1886013 00:32:25.565 Removing: /var/run/dpdk/spdk_pid1886326 00:32:25.565 Removing: /var/run/dpdk/spdk_pid1886500 00:32:25.565 Removing: /var/run/dpdk/spdk_pid1886638 00:32:25.565 Removing: /var/run/dpdk/spdk_pid1887073 00:32:25.565 Removing: /var/run/dpdk/spdk_pid1887212 00:32:25.565 Removing: /var/run/dpdk/spdk_pid1887525 00:32:25.565 Removing: /var/run/dpdk/spdk_pid1887666 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1887834 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1887976 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1888146 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1888290 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1888654 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1888810 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1889089 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1889310 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1889335 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1889516 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1889658 
00:32:25.823 Removing: /var/run/dpdk/spdk_pid1889817 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1889959 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1890243 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1890385 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1890540 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1890684 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1890963 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1891112 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1891267 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1891405 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1891689 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1891833 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1891994 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1892134 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1892412 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1892562 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1892717 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1892861 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1893135 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1893283 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1893442 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1893583 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1893841 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1894010 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1894164 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1894312 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1894543 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1894738 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1894893 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1895033 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1895311 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1895461 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1895626 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1895769 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1896050 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1896198 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1896355 00:32:25.823 Removing: 
/var/run/dpdk/spdk_pid1896501 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1896774 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1896848 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1897053 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1899239 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1954717 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1957374 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1964349 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1967700 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1970220 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1970666 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1974509 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1974511 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1975252 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1975906 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1977043 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1977456 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1977464 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1977726 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1977760 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1977862 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1978424 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1979102 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1979787 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1980198 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1980209 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1980352 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1981453 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1982272 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1987865 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1988153 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1990707 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1994466 00:32:25.823 Removing: /var/run/dpdk/spdk_pid1996707 00:32:25.823 Removing: /var/run/dpdk/spdk_pid2003195 00:32:25.823 Removing: /var/run/dpdk/spdk_pid2008978 00:32:25.823 Removing: /var/run/dpdk/spdk_pid2010560 00:32:25.823 Removing: /var/run/dpdk/spdk_pid2011240 
00:32:25.823 Removing: /var/run/dpdk/spdk_pid2021600
00:32:25.823 Removing: /var/run/dpdk/spdk_pid2023841
00:32:25.823 Removing: /var/run/dpdk/spdk_pid2026668
00:32:25.823 Removing: /var/run/dpdk/spdk_pid2027887
00:32:25.823 Removing: /var/run/dpdk/spdk_pid2029273
00:32:25.823 Removing: /var/run/dpdk/spdk_pid2029524
00:32:25.823 Removing: /var/run/dpdk/spdk_pid2029681
00:32:25.823 Removing: /var/run/dpdk/spdk_pid2029834
00:32:25.823 Removing: /var/run/dpdk/spdk_pid2030415
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2031906
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2032802
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2033247
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2036757
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2040285
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2044476
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2068130
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2071463
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2075427
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2076413
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2077535
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2080232
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2082633
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2087005
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2087015
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2089966
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2090106
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2090243
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2090515
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2090600
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2091755
00:32:25.824 Removing: /var/run/dpdk/spdk_pid2092972
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2094189
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2095403
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2096627
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2097844
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2101754
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2102277
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2104124
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2104887
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2108671
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2110716
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2114330
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2117955
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2121505
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2121921
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2122391
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2122880
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2123476
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2123907
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2124458
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2125009
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2127648
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2127814
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2131671
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2131857
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2133495
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2139264
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2139277
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2142349
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2143668
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2145202
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2145995
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2147437
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2148212
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2153727
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2154130
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2154530
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2156013
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2156426
00:32:26.082 Removing: /var/run/dpdk/spdk_pid2156838
00:32:26.082 Clean
00:32:26.082 killing process with pid 1848946
00:32:34.219 killing process with pid 1848943
00:32:34.219 killing process with pid 1848945
00:32:34.477 killing process with pid 1848944
00:32:34.478 03:20:29 -- common/autotest_common.sh@1436 -- # return 0
00:32:34.478 03:20:29 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:32:34.478 03:20:29 -- common/autotest_common.sh@718 -- # xtrace_disable
00:32:34.478 03:20:29 -- common/autotest_common.sh@10 -- # set +x
00:32:34.478 03:20:29 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:32:34.478 03:20:29 -- common/autotest_common.sh@718 -- # xtrace_disable
00:32:34.478 03:20:29 -- common/autotest_common.sh@10 -- # set +x
00:32:34.478 03:20:29 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:34.478 03:20:29 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:32:34.478 03:20:29 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:32:34.478 03:20:29 -- spdk/autotest.sh@394 -- # hash lcov
00:32:34.478 03:20:29 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:32:34.478 03:20:29 -- spdk/autotest.sh@396 -- # hostname
00:32:34.478 03:20:29 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:32:34.736 geninfo: WARNING: invalid characters removed from testname!
00:33:01.269 03:20:55 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:04.555 03:20:59 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:07.838 03:21:02 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:10.375 03:21:05 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:13.719 03:21:08 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:16.245 03:21:11 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:19.521 03:21:14 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:19.521 03:21:14 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:19.521 03:21:14 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:19.521 03:21:14 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:19.521 03:21:14 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:19.521 03:21:14 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:19.522 03:21:14 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:19.522 03:21:14 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:19.522 03:21:14 -- paths/export.sh@5 -- $ export PATH
00:33:19.522 03:21:14 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:19.522 03:21:14 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:33:19.522 03:21:14 -- common/autobuild_common.sh@435 -- $ date +%s
00:33:19.522 03:21:14 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720920074.XXXXXX
00:33:19.522 03:21:14 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720920074.yljmQT
00:33:19.522 03:21:14 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:33:19.522 03:21:14 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']'
00:33:19.522 03:21:14 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:33:19.522 03:21:14 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
00:33:19.522 03:21:14 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:33:19.522 03:21:14 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:33:19.522 03:21:14 -- common/autobuild_common.sh@451 -- $ get_config_params
00:33:19.522 03:21:14 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:33:19.522 03:21:14 -- common/autotest_common.sh@10 -- $ set +x
00:33:19.522 03:21:14 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
00:33:19.522 03:21:14 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:33:19.522 03:21:14 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:19.522 03:21:14 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:19.522 03:21:14 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:33:19.522 03:21:14 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:19.522 03:21:14 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:19.522 03:21:14 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:19.522 03:21:14 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:19.522 03:21:14 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:19.522 03:21:14 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:19.522 + [[ -n 1794156 ]]
00:33:19.522 + sudo kill 1794156
00:33:19.532 [Pipeline] }
00:33:19.551 [Pipeline] // stage
00:33:19.556 [Pipeline] }
00:33:19.573 [Pipeline] // timeout
00:33:19.578 [Pipeline] }
00:33:19.595 [Pipeline] // catchError
00:33:19.599 [Pipeline] }
00:33:19.617 [Pipeline] // wrap
00:33:19.622 [Pipeline] }
00:33:19.638 [Pipeline] // catchError
00:33:19.646 [Pipeline] stage
00:33:19.648 [Pipeline] { (Epilogue)
00:33:19.661 [Pipeline] catchError
00:33:19.663 [Pipeline] {
00:33:19.677 [Pipeline] echo
00:33:19.679 Cleanup processes
00:33:19.685 [Pipeline] sh
00:33:19.969 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:19.969 2169578 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:19.984 [Pipeline] sh
00:33:20.267 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:20.267 ++ grep -v 'sudo pgrep'
00:33:20.267 ++ awk '{print $1}'
00:33:20.267 + sudo kill -9
00:33:20.267 + true
00:33:20.278 [Pipeline] sh
00:33:20.559 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:30.535 [Pipeline] sh
00:33:30.826 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:30.826 Artifacts sizes are good
00:33:30.842 [Pipeline] archiveArtifacts
00:33:30.849 Archiving artifacts
00:33:31.077 [Pipeline] sh
00:33:31.366 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:31.386 [Pipeline] cleanWs
00:33:31.398 [WS-CLEANUP] Deleting project workspace...
00:33:31.399 [WS-CLEANUP] Deferred wipeout is used...
00:33:31.406 [WS-CLEANUP] done
00:33:31.407 [Pipeline] }
00:33:31.425 [Pipeline] // catchError
00:33:31.438 [Pipeline] sh
00:33:31.719 + logger -p user.info -t JENKINS-CI
00:33:31.728 [Pipeline] }
00:33:31.745 [Pipeline] // stage
00:33:31.751 [Pipeline] }
00:33:31.768 [Pipeline] // node
00:33:31.774 [Pipeline] End of Pipeline
00:33:31.811 Finished: SUCCESS